Hacker News
Face ID and Touch ID for the Web (developer.apple.com)
647 points by gok on June 24, 2020 | 270 comments



So roughly speaking this is WebAuthn for a website, with the iPhone acting as the dongle.

It's a really good idea. I can see there being big demand for simplifying sign-in - I can easily see a time when it's worth skipping the hassle of managing multiple sign-in processes and just choosing WebAuthn or nothing.

Edit: to be clear, this won't affect B2C sites whose monetisation is based on reaching as many people as possible. But imagine a SaaS business charging upwards of 100 bucks a month - they have something valuable to protect, and the security story just got a lot easier.


Webauthn actually fully supports this model as "platform authenticators", ie hardware security modules built into the client system. You see this on the windows side too where "Windows Hello" integrates with the TPM and acts as a platform authenticator as well.

No need to speak roughly.


Yup. A site can even say "I want a platform authenticator" or "I specifically don't want a platform authenticator" during registration using the Javascript API.

Most sites should just not care, but it's an option if you've determined there's a specific reason it matters in your application.
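As a rough sketch of what that looks like in practice - the names and relying-party values below are hypothetical, but the `authenticatorSelection` field is where the preference is expressed when building the options for `navigator.credentials.create()`:

```javascript
// Hypothetical registration options illustrating authenticator selection.
// "example.com", the user names, and this helper are placeholders; a real
// site generates the challenge and user.id server-side.
function makeRegistrationOptions(challenge, userId) {
  return {
    challenge,                                   // random bytes from the server
    rp: { name: "Example Corp", id: "example.com" },
    user: { id: userId, name: "alice", displayName: "Alice" },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      // "platform" = built-in (Face ID / Touch ID / Windows Hello);
      // "cross-platform" = roaming keys (e.g. a Yubikey);
      // omit the field entirely to accept either.
      authenticatorAttachment: "platform",
      userVerification: "required",
    },
  };
}

// In the browser you would then call:
//   const cred = await navigator.credentials.create({
//     publicKey: makeRegistrationOptions(serverChallenge, serverUserId),
//   });
```

Leaving `authenticatorAttachment` out is the "most sites should just not care" default.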


> A site can even say "I want a platform authenticator" or "I specifically don't want a platform authenticator" during registration using the Javascript API.

Websites should not depend on JavaScript for something that should be able to be done declaratively.

(Amongst other things, we shouldn't need to use `fetch`/`XMLHttpRequest` when a <form> would work just as well - if only <form> let us use more than just GET and POST, supported more types of serialization, and supported asynchronous form submission. And bring back <keygen>!)


* You don't need the TPM for Windows Hello to act as your security key. I can't enable BitLocker because there's no TPM, yet I have Hello enrolled as a key for GitHub.


I’m not sure if that makes me happy or sad to hear..


You can use BitLocker with just a password, not sure why you're implying you need a TPM.


https://www.howtogeek.com/237232/what-is-a-tpm-and-why-does-...

And yes, technically there is a way to use it without a TPM, but it's not accessible through the computer's management GUIs - you need to create custom GPOs and apply them.


Is this how passwords.google.com prompts me with my Android phone lock when I want to access a password in plaintext? I've always wondered that.


I think that's a custom pathway but built on the same primitives. That being said, I'm not 100% on that.


iOS/macOS already let you copy/paste between them if you have Bluetooth enabled on both sides and are signed in to the same iCloud account on both. I wonder if they're planning to let me authenticate via WebAuthn on macOS using my phone as a platform authenticator?


The magical cross device copy and paste is one of my favorite unsung features.


Unfortunately it’s been a little finicky for me, but when it works, it’s a great tool.


It works 100% of the time when I have both WiFi and Bluetooth on, when do you face problems?


It doesn't work if both devices are connected to different WiFi networks. But Airdrop works without being connected to any AP…


Yes - either both devices are connected to the same WiFi network, or neither is. Kind of weird.


Possibly - there is functionality to expose an authenticator over Bluetooth or NFC, both of which would let an iPhone or watch act as an authenticator even for a Windows laptop.

There are UX issues there that still have to be solved. Also, while I haven't dug deep into what Apple has done yet, I suspect this is a browser-level feature and not a platform-level feature, where say your phone would pop up a "do you want to authenticate" screen when held near another computer with BLE, the same way you get "do you want to use Apple Pay" today.

Platform level support would allow me to use this directly within other applications and browsers, rather than just from Safari and the SFSafariViewController/ASWebAuthenticationSession Safari-managed views.


Can someone explain how this works behind the scenes?

As I see it, data is stored in a secure enclave separate from the processor and OS, which only provides a match result based on the generated private key. So the phone will create a separate key pair for every site and send the public key to the server, which can then verify signatures from the device.

Wouldn't Apple have to open source their hardware as well?


This uses WebAuthn (https://webauthn.guide - https://webauthn.io). The video has a great description starting at 1:40.

Here, the iPhone functions as a platform authenticator, like a U2F usb/nfc key would. Why would they need to open source their hardware for this?


I was talking about face ID and how apple handles the data using secure enclave.


Does this mean integration with SaaS IDPs?


If they support WebAuthn it should.


FYI, fingerprint for WebAuthn already works on Android - I tried it with my Pixel phone.


It'd be really nice to see this working with iCloud Keychain at some point too - so you can have FIDO working as a secure platform thing, but with syncing, which deals with the problem of effectively losing your keys over time.


Having WebAuthn work synchronized across devices with iCloud would be technically feasible, but it would not be FIDO certifiable, which requires the private keys to stay on the device.

Typically, deploying WebAuthn means you have to put more thought into your account recovery flows - you don't want to deploy strong authentication only to fall back to email or KBA-based recovery, for example. This recovery path may then be used both to recover from a lost device and to simply add a second device.


WebAuthn is generally about device authentication with credentials that can't leave the device, though that could change depending on where and how the hardware gets/stores its tokens. Or if you rely on a third party, like Apple, to store the tokens for you and use OAuth with an MFA indicator in the attestation?

General advice: If worried about losing a device, try to register more than one. Even iCloud Keychain requires other hardware for authentication... same problem applies.

The only way out is having a backup, like taking ID to an Apple Store to regain access... that varies by provider right now, but who knows. Maybe Login with Apple will become WebAuthn-compatible in the future? (Haven't watched this video yet.)

If you’re an enterprise and worried about key authenticity or varying WebAuthn implementations, you can look for specific types of keys or even request specific serial numbers of FIDO2 dongles from the web browser, etc.


Not really a problem for me - I keep a set of six CTAP2 keys registered on everything with careful labelling etc.

But for normal people, we do need to shift more of the balance toward usability, I think. The thing with iCloud Keychain is that it can comfortably be recovered without breaking the end-to-end encryption with only a single remaining device, and many Apple users have as many as 3-4 devices in the circle of trust.

It seems ideally some kind of "roaming platform" additional option would be good in the webauthn standard


I agree, but it sounds like we’re trying to get the web browser to simplify and implement OAuth2 and OpenID Connect via WebAuthn ... If we already have OpenID Connect, the only advantage to end users under that scenario is a login-with-Apple ease-of-use improvement. Seems more likely that we’ll continue using OAuth2 and OIDC server side for this, for now... but maybe we’ll end up standardizing the ways MFA is requested and presented by providers...


I can't find docs, but I assume one can authenticate with only their laptop. Most new ones already have fingerprint sensors, and they will likely bring Face ID to macOS soon.


Is it webauthn or is it once again proprietary tech?


The video explicitly mentioned it is WebAuthn Platform Authenticator (1:54)[1].

[1]: https://www.w3.org/TR/webauthn/#platform-authenticators


It's webauthn from what I can see.


This really is just bringing WebAuthn to Safari. I've been using it via Chrome with Touch ID for our corporate Okta SSO and it has been working great.

WebAuthn is really just a way to make public-key/private-key crypto scale. The user never really knows about or interacts with the keys.

The website doesn't store a password, they store a public key.

The user doesn't know about the private key (paired to the public key) they just know how to unlock the private key via the yubikey or biometric device.

The site sends some data to the user to sign, and they do so with the private key, then the site verifies the data was signed by the private key, boom: authenticated.


How does this work if you need to sign in to a site on a borrowed computer while traveling or something? Is the private key derivable from a master password or something?


Apple already kind of supports this for iCloud login. If you want to authenticate on a new device, you'll get a prompt on one of your already authenticated devices to confirm the login. You still need the password, but it's really not necessary... if there were also some additional "privilege" system (i.e. the same user account would have fewer privileges when logged in through a less secure method), it would be even safer.


I'm surprised nobody has mentioned this, but you can also use a hardware FIDO2 token for WebAuthn, like a Yubikey. This is ideal if you use many computers.

If you need to sign in on a borrowed computer, you visit the site, it asks you to insert the Yubikey (if it isn't already), you enter the PIN code and touch it, and that's it.

I believe a yubikey with fingerprint instead of PIN (or additionally? Information is scarce on it) is also coming.


A few options if you want to maintain MFA:

For my Okta account, they support push notification to their app on my phone as an alternative authentication measure.

Another approach is a Bluetooth (or NFC) enabled device (like your phone) that actually carries the private key. When using a borrowed computer, the signed attestation data is shared with the borrowed system, but it can only be used once, since the private key is never transmitted.

So basically you can either have a side-channel to another MFA method, or a relay where you don't need to trust the intermediary beyond the current session.


Similar to how you would sign in on devices/browsers without WebAuthn support, or when you don't have a physical token (Yubikey etc.) with you: use a fallback method provided by the website. This is usually TOTP or a scratch code. The website needs to implement it, though.

One thing to keep in mind is that this is not supposed to be the only factor required to sign in. It should be used as a second factor in a similar way to TOTP (but with much better usability).


With Okta (since GP mentioned it), you would sign in with a password, then tap the "yes, it's me" button on your phone, or you could use TOTP, or even a Yubikey.


Okta is more of a SAML/OpenID Connect thing with built-in multi-factor authentication than a replacement for WebAuthn, though. Okta could embrace the WebAuthn platform authenticator as one of their authentication factors if a user is unwilling to install an app, but a website isn't expected to use Okta as a second factor in its authentication flow.


OP is referring to how you already can sign in to Okta as a website, not as an authentication mechanism. You can create a sign in flow that goes like so:

username -> password -> [ WebAuthn | Okta Verify Push ]

This approach can be used on any website, just with a regular TOTP code in place of the proprietary Okta Verify Push.


Ah, I didn't know you could do the Okta push without using their whole SSO suite. Thank you! I guess in this scenario Okta acts more like Authy's proprietary OTP thing, then.


Web Authentication does support being the only authentication mechanism, since it is in itself multi factor (in this case, physical possession of the phone and biometric confirmation)


It's mentioned in the video as well that websites should offer this as a faster sign-in option while providing other options alongside.


To think we could have had this a decade+ ago with TLS client certificates, if web browsers weren't perpetually stuck in the past.


You have the blame misattributed: almost nobody used client certificates because they cost money ($100+/year). That meant there was little demand outside of a few spaces like government, and absent usage there was not much pressure for UI improvements.

Client certificates are also worse for privacy and phishing resistance: with a certificate, if I can convince you to click on a link I get your identity. From the site's perspective, I don't have any way to tell whether the person with the certificate is the same person I saw or the person who compromised their computer or convinced a CA to issue a cert for someone else. Requiring key storage to be on a hardware enclave significantly reduces that risk, allows for the stronger attestation requirements mentioned, and also means that you're changing things from “trust anyone who can get a CA certificate” to “trust anyone who can do signatures from a previously-registered hardware key”.


Cost money? Once upon a time, HTML had <keygen>.


How would you do the signing part, though? Most CAs don't do client certificates at all, nor issue certificates with signing flags.

Even if a CA does sign client certificates, and websites are expected to store the public key, that exposes privacy concerns, since a public key is now personally identifiable. If a website must provide its own self-signed CA and require users to submit a CSR when registering for an account, it becomes a huge overhead for both the website and the user just to log in.

I work with enterprises, and the requirements usually include client certificates. I've never had a good experience setting them up, even with a limited number of parties.


You don’t need a public CA to do client certificate authentication. Hell, your computer doesn’t even need to trust the CA that signed the key - it’s the server on the other end that cares about that.

This is precisely how WebAuthN works - but we figured out that we actually don’t need to go through the headache of getting CAs and signing involved at all. Just store a public key attached to a user after they’ve signed in via traditional means and let the browser/security token manage the keys.


Client certificates suck in a bunch of ways that WebAuthn, specifically designed to solve this problem, does not. Example:

If the certificate used to sign into Hacker News as "sneak" is also used to sign into PornHub, then I can correlate that to discern that "sneak" on HN uses PornHub. You can't do that with WebAuthn credentials - a separate credential is spun up for every single registration; it's completely useless everywhere except the one site it was issued for, and can't be correlated with other credentials except via a cryptographic attack on the underlying primitives (i.e. breaking WebAuthn itself).


There's no reason a browser couldn't have generated a new self-signed client certificate for each site, though; the fact that they don't offer that as an option is just a browser design decision.


...which means a login is now tied to a browser, and you have to come up with a way to securely export or sync the private key (bad idea?) or a way to link another browser to a login. Also, a user losing access to all their websites by accidentally uninstalling a browser doesn't sound very user friendly.


This is exactly the same situation as software WebAuthn keys, if they are used as a single factor.


There was actually a <keygen> HTML tag, used within <form>s, to generate a keypair that was then supposed to be signed by the server and finally returned to the browser for local installation. At least that's how I understand it. It's been deprecated for a while now.


Oh wow, yeah - I didn't realize that existed, and now I'm sad it never caught on. :(


I think some unlikely security scenarios are neglected, though. You trust your platform to keep your private keys safe, and it requires you to trust it. A vulnerability here would compromise your whole identity if private keys can be extracted, and I don't yet believe that to be impossible. It will solve common problems like phishing, though.


It probably isn't impossible, but it is made much harder with the use of separate hardware, be that a physical key or a secure enclave. A fully compromised laptop still can't get the private keys in a "perfect" system. Of course, the hardware might have design defects, or some debug feature for extracting private keys that shouldn't have shipped in the production build, or a government-mandated backdoor, etc. But it now requires a compromise of both the client system AND the hardware keystore.


You should pay special attention to the section about attestation, which is not something that can be done in a privacy-focused way without an anonymous attestation authority (which is part of the iOS 14 feature).


One of the reasons this is so important is that it makes it far harder to phish people. The WebAuthn APIs include the web origin in the authentication process, so it's not possible to use something like modlishka to phish people. If we use magic links or something similar like a QR code for adding new devices, or more users also have roaming authenticators with PINs, WebAuthn will massively reduce phishing success where it's implemented, as it's pretty easy to make a website that doesn't have phishable credentials at all.
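The origin binding works because the browser (not the page) writes the real origin into `clientDataJSON`, and the authenticator's signature covers it. A simplified sketch of the server-side check - field names follow the WebAuthn spec, but a real verifier also checks the signature and authenticator data, and uses base64url encoding:

```javascript
// Simplified server-side check illustrating WebAuthn's phishing resistance:
// a proxy like modlishka would forward a response whose embedded origin is
// the phishing domain, so this check rejects it. Plain base64 and no
// signature verification here, purely for illustration.
function checkClientData(clientDataB64, expectedOrigin, expectedChallenge) {
  const data = JSON.parse(Buffer.from(clientDataB64, "base64").toString("utf8"));
  return (
    data.type === "webauthn.get" &&
    data.origin === expectedOrigin &&      // rejects phishing proxies
    data.challenge === expectedChallenge   // rejects replayed responses
  );
}
```

The key point: the page can't lie about `origin`, because the browser fills it in and it's covered by the signature the site verifies.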


Interesting, Apple is letting you change your default web browser with this new iOS version, but also adding Face ID and Touch ID to Safari.

Why would anyone want to build these features if they're so platform / browser specific? Does anyone know if these auth features might work on other browsers on iPhone?


There isn't a reason it wouldn't work - the browsers all use the same engine anyway.


This isn't really true any more. Apple requires their competitors' browser apps to use a 'webview' to display websites, and Safari does not use this. The iOS webview may share a layout/paint engine with Safari, but it is heavily restricted in other ways. Apps with webviews (like Chrome for iOS) can't have extensions, for example.

But the subtlest, cleverest restriction is that webviews are forced to use an older and slower JavaScript engine. This ensures that websites feel a little slower in Apple's competitors' browsers. Another major upshot is that non-Apple browsers can't offer the same modern JavaScript APIs to websites as Safari can. For example, last time I checked, 'getUserMedia' was not available on iOS outside Safari, meaning non-Apple browsers can't support web-based video conferencing tools like Jitsi.

This also implies that websites running in non-Apple browsers won't necessarily have access to the APIs necessary to request Face ID sign-in, unless Apple decides it is in their interests to make this available to webviews.


> But the subtlest, cleverest restriction is that webviews are forced to use an older and slower JavaScript engine.

AFAIK that hasn't been true for years. WKWebView (as opposed to the old UIWebView) lets you use the full speed JS engine.


Ah, just googled it, you're right, they did eventually make the Nitro engine available in webviews. But my main point is that it's not exactly the same, they restrict certain features, and so it should never be expected that modern functionality will work in webviews just because it works in Safari.


> they restrict certain features, and so it should never be expected that modern functionality will work in webviews just because it works in Safari.

I'm questioning this because you were just proven wrong once - do you have a source that confirms what you're saying?


Interesting, thank you for the information!


There are some differences between Safari and WKWebView. Some features are blocked.


It's funny how much bashing Google gets for its monopoly with Android, pushing users to use Chrome, the Play Store and whatnot. While all of that is relevant, Apple's stranglehold seems much stronger and worse.


at least apple never pretended like iOS was open source


> how much bashing Google gets for monopoly with Android, pushing users to use Chrome, Play Store and whatnot

I’ve never noticed that criticism come up in the wild. The criticism I’ve seen of Android is that it’s primarily a surveillance device with a questionable security model.


You are exactly right. I wish both platforms were more open.


Apple sells between 10% and 20% of smartphones per quarter[1], which implies Android makes up 80+% and Windows/Blackberry a negligible amount.

How can Apple be a monopolist from such a small position, or have a "stranglehold" when they are outsold 4-8x by the competition?

[1] https://www.statista.com/statistics/216459/global-market-sha...


So you are comparing Android vs iOS. You cannot see how that is apples vs oranges? Just because something is based on Android doesn't make every Android-vs-iOS comparison a direct one. Try Google phones vs Apple phones, or Huawei vs Apple.


It is a direct comparison, because Google control Android.

The poster I replied to compared Google and Apple using the terms "stranglehold" and "monopoly". These terms don't work because Apple don't have control of most of the smartphone market, so they can't be a monopoly. Through Android, Google do control most of the smartphone market, so they might have a monopoly effect on the smartphone market. Apple aren't in any position to strongarm things. Google are.

Therefore whatever Apple does with iOS can't be "worse" from this perspective. It could be worse for users, but it can't be "worse" monopolistically for Apple to affect 20% of the market in Apple's favour, than it is for Google to affect 80% of the market in Google's favour.

Google could strongarm Huawei and Sony and LG and all the other Android manufacturers. Apple couldn't. Google could strongarm 80% of smartphone users, Apple only 20%.


Sure it does. If what I want is not to be blocked by an OS that won’t let me write my own JIT, then any android phone will work. And in that case, there’s a ton of competition to chose from, and the JIT I write can (theoretically) work on any of them.


Monopolies have nothing to with world markets. There is no world government.

Apple has a 49-60% share in the USA. Their next biggest competitor is Samsung with less than 1/2 of that.


Monopolies have nothing much to do with government, they're about being the only provider in a market. If Apple controls less than half, they're not the only provider even within the US, and that only makes it weird for you to say - effectively - only the US matters, other countries don't.

Google controls Android which Samsung use, and the terms on which they're allowed to use it if they still want to allow Google apps. That means it's not Samsung vs Apple, it's Google vs Apple, especially when my reply was to someone comparing Google to Apple, not Samsung to Apple.


It is not Google vs Apple. No court would take that case. Google hardly has any market share in phones (Pixel), and Apple doesn't sell OSes.

Apple and Samsung are in direct competition. Apple has >50% of the smartphone market in the USA; Samsung has less than 25%, and everyone else has even less. Google has <1%. That Samsung happens to use Android is irrelevant. FWIW, Samsung has its own app store and plenty of features distinguishing it from every other phone.

The rest of the world has its own markets. If the UK wants to sue Apple or Google for being a monopoly, they only care about the UK market, not how it's selling in Indonesia. In other words, it doesn't matter one whit if some company has a large market share worldwide; monopolies are only enforced in a specific country for that country's market, not the world market. You quote iOS as having only 20%, but as that is a worldwide number it's entirely irrelevant. If iOS had 100% market share in Singapore but only 5% market share in the world, it would be Singapore suing over the Singapore market. There is no one who can sue on behalf of the world.


I don't understand why you're trying to pick Samsung out. If Google strongarm Samsung and Motorola and LG and say "you can't put Google apps, Google Maps, Google play store on your phones anymore and can't get Android updates from Google unless you XYZ" that has a greater reach than Samsung alone has, doesn't it? And that is a reach that Google can have, and Samsung, Motorola, LG alone cannot have, right?

And by reaching over multiple Android phone sellers, that has at least a similar, but likely much greater reach than anything Apple can do, doesn't it?

Whether a company can be sued for being a monopoly in a given jurisdiction isn't so interesting to me, as whether they can influence ~80% of the worldwide smartphone market; If they can do that but you can't sue them for being a monopoly in Tuvalu that has no bearing on anything interesting.


"browsers all use the same engine anyway"

The layout engine and even rendering etc. are quite distinct from the other browser features. What goes on in front of your eyes is fairly separate from networking, security, and all those other nice things.


If Apple lets other browsers using the engine have access to this feature.


It appears to be the WebAuthn standard (based on a quick look at the video so far) and so implementing it would enable you to leverage other Webauthn targets as well, not just on iPhone but on Windows devices and more.


An example is the in-app browser you see when clicking a link in Instagram that goes to my online store. The browser is still inside the Instagram app, so the user isn't already logged in to their account on my store. That person can quickly log in without typing an email and password.


Last time I checked, efficient ad blocking worked only in Safari. If that did not change, it's another reason to stay on Safari.


How do you mean? I use Focus and Safari is configured to use Focus as its content blocker. Is there some way Safari uses Focus better than Focus does?


Great to see Apple is focusing on Webauthn this time instead of rolling something themselves that only works inside their walled garden.

Android is doing the same thing: Android phones are now also capable of being a FIDO2 authenticator for Webauthn.

Now what we need is many more sites actually offering it :) I'm already using it for Office 365 but that's really the only one so far that I use that offers it.


Going to have to give serious thought to where I will and won't use this.

There are a lot of implications - no ability to automate, and giving others data proving you were in front of some machine, are two big ones.


That's a really interesting point. If this really does allow a web user to prove that a human interacted with the computer, it'd make for a really nice CAPTCHA replacement.


it doesn't do that, since there's no attestation.


Yes there is. The video covers this clearly.


Did you watch the video?


where he says it is "not included"? (because the Apple secure enclave does not attest keys.)

also where he falsely alludes to other attestations being identifying? sure, they can be, but they generally aren't.


From the server side, isn't this just a WebAuth integration?

How does the server know for sure if the client is on an iOS Safari browser on an iPhone with FaceID or a custom browser on any OS and any non-locked-down hardware being run with Selenium?


Attestation. If a website requests it, the device will provide cryptographic proof that you used a specific vendor’s device to store the resident credential. The proof is a certificate signed with a vendor’s secret attestation key.


Can't the key get stolen if it's on the client?


Yes, but it’s probably stored in the Secure Enclave, so it’ll be hard work.


Correct. The key material is stored in the Secure Enclave.


The question is more along the lines of: does this provide more security than passwords for real users?

Stealing a password is probably more easily done than stealing a private key that is never transmitted. The primary threat model is protecting the credentials of real users rather than protecting against fraudulent users (though some considerations have been made for that too).


While what you say is true, it doesn't seem relevant to this thread.


It won't be the only sign in method unless the point is to only allow _that specific device_ to connect.


If anyone wants to deploy this server-side, I made a Django library that's very easy to use:

https://gitlab.com/stavros/django-webauthin

You can see a demo login here: https://www.pastery.net/

It allows the user to log in without a username or a password (untested on any Apple device as I don't have any, please file bugs if it doesn't work).


Hmm, on an iPhone it asked me to hold my authentication device near the top of the phone - it didn't use Face ID at all... :/


Are you on the new OS seed build?


I get the same thing, and I'm on the iOS 14 beta


Doesn't work (yes, on iOS 14).


I don't see a problem with this as part of MFA, but here in the US our Fifth Amendment protections are pretty lax and only cover passwords (sometimes). If the police wanted to force your fingerprint or Face ID to log into a website (say, a protest message board), they could do it just the same as they can force a blood draw during a DUI stop, with a warrant from a judge.


The only way they can do that is with a warrant and if you own the website.


I got a Yubikey earlier this year and was disappointed by Apple's implementation of WebAuthn (it doesn't work if there's a PIN on the key), but it looks like they're fixing that as well in iOS 14.


I'm hoping it works with NFC and not just the lightning one.

Would be nice to have Yubikey replace my password and then allow me to enroll FaceID so I can keep signing back in.


The current Yubikey works with NFC on iOS, it just doesn't work if you add a PIN to it (which Windows Hello requires). They mention that they added PIN support in the changelog


If the web migrates to biometric sensors for authentication, I hope this won't suffer from vendor lock-in. When every new device ships with facial recognition and/or a fingerprint reader, it will be nice to login using my face/fingerprint irrespective of the device I'm on.


This isn't biometric authentication: it's WebAuthn, which is a public-key based system implemented by all major browsers (https://webauthn.io/). This particular implementation uses biometrics to unlock it but other implementations use a contact sensor (Yubikeys) or PIN, and the target website doesn't need to know anything about which particular mechanism was used to unlock the keystore.


This looks like a WebAuthn implementation, in which case it's an established standard and lots of vendors have interchangeable implementations. For example, Windows Hello has been WebAuthn compatible (FIDO2) for a while for sites configured to accept on-device authenticators.


This is an implementation of the Web Authentication API, and is exposed as a platform authenticator. There is no vendor lock-in.


I don't see how this could ever be something not vendor-specific because without this being tied to "Log in with Apple" you're just saying "trust the client."

Maybe that's fine if all you want is to "lock" a sensitive page against people who aren't the device owner, but that's pretty limited compared to using Face ID to actually log in.


It has no relationship to "Log in with Apple". It's a WebAuthn authenticator.

Almost all web sites should just implement WebAuthn. On a suitable iPhone or Mac users will be able to sign in by touching the sensor or looking at the camera, while on my Pixel phone I touch the fingerprint sensor, on this Linux desktop I touch a Yubico Security Key.

If your site is paranoid that some crazy user will choose a bad WebAuthn authenticator, or deliberately sabotage their own security for some reason, then you can use WebAuthn Attestation to obtain a signed document from the authenticator (yes, over the Web) which proves that it is, for example, an Apple iPhone 25 Super Mega Plus. I don't think you should bother doing that, but you can.


We’d need to have a standard HTML element like <facial-rec> or something and let the browser handle mapping it to whatever specific hardware the device is using


The issue is that it's just a token that would simply say Passed/Fail. So it's trusting the client/browser.


It's more than that -- from what I'm seeing, it looks like it's a cryptographic token, presumably signed by a certificate that's embedded somewhere inaccessible in the device (probably in the secure element).


If the token is signed, you could validate it with Apple (or the vendor that implemented the face recognition on the device, e.g. Samsung, Nokia, PinePhone, etc.).

You just need an open standard; you could even embed the URL of the validating API in the token, so anyone could create their own Face ID provider.


That's precisely what the attestation section of the talk describes. This is all part of the WebAuthentication standard.


So for the non-developer, is this basically as if any participating website could send a 2FA request to your iPhone (like the role of the SMS code but more secure), have the iPhone verify you by face/touch, and then confirm back to the website and let you in? (also like Google does with their app?). Or even take the place of a password?


Done the way Apple describes it, this (optionally) replaces the entire login sequence. Username, password, second factor, all replaced by tapping one button.

When you register for this feature, your Apple device gives the site an ID (a really huge random-looking number) and a public key. The site associates this ID and key with your user.

On subsequent visits you do something (e.g. press the "Face Sign In" button on the site) and then your device authenticates you (with Face ID/Touch ID) and if you pass it sends the same ID, and some signed stuff about this authentication. The site matches the ID, finds your user, checks the signed stuff is OK with the public key it stored before, and you're in - no other steps and very secure.

WebAuthn can also do simpler journeys that only replace the second factor, or that act as a single factor with the user still providing their username first, but Apple is pushing developers toward this lovely journey that suits the Face ID/Touch ID experience on an Apple product.
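The registration and sign-in journey described above can be sketched end to end. This is a toy model only: textbook RSA with tiny primes stands in for the Secure Enclave's real key, and a plain dict stands in for the site's user table. Real WebAuthn uses proper key sizes and signed CBOR structures, but the shape of the exchange is the same - the site stores an ID and a public key, and the private key never leaves the device.

```python
import os, hashlib

# Toy RSA key pair standing in for the Secure Enclave's key.
# (Tiny textbook primes - illustration only, not remotely secure.)
n, e, d = 3233, 17, 2753  # n = 61*53, e*d = 1 mod phi(n)

def sign(data: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(h, d, n)          # happens inside the authenticator

def verify(data: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == h   # done by the site, with the public key

# Registration: device hands the site a credential ID and a public key.
credential_id = os.urandom(16).hex()
users = {credential_id: {"user": "alice", "public_key": (n, e)}}

# Later sign-in: site issues a fresh challenge; the device signs it
# only after Face ID / Touch ID unlocks the key.
challenge = os.urandom(32)
signature = sign(challenge)

# Site looks the user up by credential ID and checks the signature.
record = users[credential_id]
assert verify(challenge, signature)
print("signed in as", record["user"])
```

Note that nothing secret ever crosses the network: the site only ever sees the ID, the public key, and signatures over challenges it generated itself.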


Sometimes it feels like I'm the only one that doesn't trust biometric sensors. I mean, it's basically magic. If for any reason the face detection algorithm doesn't match your face anymore, you have the equivalent problem of losing the master password to your entire password store.


It's a little different, I think.

You could have it set up where your face is the one-and-only thing that identifies you, but that doesn't seem to be the case in practice.

Instead, we have the multipart authentication used in many places today: something you know (password), something you are (biometric fingerprint/face), and something you have (your physical device, your email account, your phone number).

Any one of these has downsides (stolen password, biometric misidentification or duplication, redirected phone number) but in combination with the others makes it much harder to circumvent authentication.

Almost all systems have some kind of fallback that rely on a 'something you know' like a master password, and can optionally only be changed if you have other authentication methods (like physically having the device in your hands).

Having multipart authentication allows for a better user experience (look at your phone and it unlocks) with an acceptable amount of risk (you have to have your phone and be you in order to unlock it), with systems to fall back on if something fails (get the super secret password off that slip of paper you hid in your mother-in-law's garden shed behind the loose brick in the wall). The typical authentication flows are both more secure and more convenient, and the user is responsible for the security of the backup.


Forgive my ignorance: isn't this solved by fallbacks, at least on the iPhone/MacBook itself (don't know about webauthn)?

If my finger is rejected multiple times, I am asked for the password instead.


Linking biometrics to cryptographic authentication is difficult. When I worked on this problem about 5-6 years ago somewhere else, it came down to adding a separate applet for the biometric verification to the secure element, which then authenticated itself to the user authN applet, which was then authorized to generate the user authentication cryptogram.

Biometrics are probabilistic samples of data, where cryptographic verification requires deterministic inputs. They are apples/oranges and that's what made this hard, so you need a connector for them. The attack on such a scheme means spoofing the biometric authenticator's validation message to the cryptographic authenticator, which, if this all occurs between applets on the same secure element, raises the bar for attacks.


There are ways to use error-correcting codes to turn probabilistic samples into deterministic ones and mix them with a key. These are constructed so that you gain no information if you only know the key, or only know the biometric. See https://en.wikipedia.org/wiki/Fuzzy_extractor.
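The code-offset construction behind fuzzy extractors can be sketched with a repetition code standing in for a real error-correcting code. This is illustrative only - a repetition code this short gives nowhere near cryptographic strength - but it shows the idea: publish (codeword XOR biometric), and a later noisy reading of the same biometric recovers the key as long as the errors stay within the code's correction capacity.

```python
import random

R = 5  # repetition factor: majority decoding corrects up to 2 flips per group

def enroll(key_bits, biometric):
    # Code-offset construction: publish codeword(key) XOR biometric reading.
    codeword = [b for b in key_bits for _ in range(R)]
    return [c ^ m for c, m in zip(codeword, biometric)]

def recover(sketch, noisy_biometric):
    # XOR the fresh (noisy) reading back in, then majority-decode each group.
    noisy_codeword = [s ^ m for s, m in zip(sketch, noisy_biometric)]
    key = []
    for i in range(0, len(noisy_codeword), R):
        group = noisy_codeword[i:i + R]
        key.append(1 if sum(group) > R // 2 else 0)
    return key

random.seed(0)
key = [random.getrandbits(1) for _ in range(16)]
biometric = [random.getrandbits(1) for _ in range(16 * R)]
sketch = enroll(key, biometric)

# A fresh reading of the "same" biometric, with a few bits flipped
# (here deliberately at most 2 flips in any group of R bits).
noisy = biometric[:]
for i in (0, 1, 7, 13, 42, 79):
    noisy[i] ^= 1
assert recover(sketch, noisy) == key
```

The published sketch on its own tells you (roughly) nothing about the key without the biometric reading; the papers above make that intuition precise and quantify how much entropy survives.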


Great papers linked from that. At the time, it seemed the sensor APIs didn't provide a key with sufficient entropy to be considered a cryptographic verification for the tokens we were using. We didn't take it further because the entropy of the biometric sample was going to be diminished by using it to generate a static key. The linked papers do show you can compose a key with error-correcting codes, and intuitively you could use some kind of probabilistic filter to verify a set of bits in the sample via Shamir secret sharing to provide a threshold for a key. It still relies on separate security domains between the biometric sample collection and the cryptographic verifier, and we'd be deep into the engineering over the differences, but these papers are really useful for getting into the details. One of the early ones is a good intro: http://people.csail.mit.edu/madhu/papers/2002/ari-journ.pdf


A fingerprint can be a personal password or it can be a government ID, but it can’t be both. Since the U.S. government already has something like 200 million fingerprints on file, and many foreign governments collect fingerprints whenever you travel, these fingerprints are sometimes leaked en masse (https://en.wikipedia.org/wiki/Office_of_Personnel_Management...), and because they can never be changed, biometrics aren't my personal choice for a secure method of authentication.


But I think you missed the point about the second factor, because there are really 2 factors here:

1. Something you have (e.g. your phone, in this case the Secure Enclave that stores the private key).

2. Something you 'know', e.g. your fingerprint.

Just having the fingerprint itself is not sufficient.


That's a good point that it is more nuanced. The issue I think is that organized crime and unscrupulous governments are getting better at connecting these things so they are not as cleanly separated as they have been in the past. Just look at China. Essentially spyware and viruses are installed at checkpoints on people's phones and biometric tracking is becoming very commonplace. I don't think it will be long before organized crime begins to get better at this too. As such, being able to change that "something you know" is a very powerful countermeasure.


This is key. I just hope this never changes and we never end up in a situation where the fingerprint is all that is used.

Because for most non-technical users, my bank app asking for my fingerprint to unlock the app is exactly that. They don't really get the difference.


In addition, fingerprints and faces are particularly bad biometrics for keeping secrets, because attackers can get hold of them without you noticing.

E.g. anyone can take a picture of you in a crowd, or lift your fingerprints if they can get to somewhere you've been. But they can't do a surreptitious retinal scan on you.


FaceID is one of the most important technologies ever invented. Imagine if Apple were somehow open to licensing it and making it universal, like Bill Gates wanted to make PCs universal. Driver licenses, passports, credit cards, student IDs, insurance IDs - all of these could become things of the past. You go to the grocery store and simply pay using your face. You want to pay online? Simply look at the Face ID device that now comes standard with all new computers sold. This is hundreds of billions of dollars' worth of business in making so much legacy go away. Security and identity would now (hopefully) be solved, but unfortunately it's still all behind a walled garden.


Why did fingerprinting never catch on in that way? That tech has existed for ages. Way cheaper to implement as well. Seems like it would also be less prone to false positives compared to facial recognition, but that’s just speculation on my part.


Touch ID has been hacked multiple times. It's not reliable.

[0] The probability that a random person in the population could look at your iPhone or iPad Pro and unlock it using Face ID is approximately 1 in 1,000,000 with a single enrolled appearance. As an additional protection, Face ID allows only five unsuccessful match attempts before a passcode is required. The statistical probability is different for twins and siblings that look like you and among children under the age of 13, because their distinct facial features may not have fully developed. If you're concerned about this, we recommend using a passcode to authenticate.

For comparison, Touch ID has a probability of 1 in 50,000.

[0] https://support.apple.com/en-in/HT208108
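Back-of-envelope with the figures above, assuming the attempts are independent and the same five-attempt lockout applies to both sensors: a random person gets roughly five chances in a million against Face ID before the passcode is required, versus about one in ten thousand against Touch ID.

```python
def p_unlock(p_match, attempts=5):
    # Probability that at least one of `attempts` independent tries matches.
    return 1 - (1 - p_match) ** attempts

print(f"Face ID:  {p_unlock(1/1_000_000):.2e}")   # ~5.0e-06
print(f"Touch ID: {p_unlock(1/50_000):.2e}")      # ~1.0e-04
```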


Fingerprints are left everywhere and even photographed from a distance. It is harder to 3D scan a face then recreate it in such a way as to fool Face ID although I am sure it is possible.


Not sure how FaceID actually works, but in my understanding it can effectively distinguish your face from everyone else's. It can't uniquely identify your face by searching across 7 billion faces.

Otherwise I don't understand why iPhone doesn't support a "guest" profile via FaceID. Or why iPad doesn't allow multiple users (family) on the same device by simply "looking at it".


I can’t tell if you’re joking.


I wonder what parts of the WebAuthn APIs iOS Safari supports, since MDN says most of the APIs are not supported.

https://developer.mozilla.org/en-US/docs/Web/API/Web_Authent...


I think that this is coming in the next release of Safari, so MDN is correct, it's not currently supported.


WebAuthn API is supported in WebKit starting from Safari 13 and iOS 13. The MDN chart needs to be properly updated.


Hardly - it only supports basic MFA scenarios with a cross-platform authenticator.

User verification + resident keys were never implemented in Safari 13.


The keynote a few days ago failed to mention Face ID being used to unlock your Mac... and this video shows/mentions Face ID on a macOS 11 website?

Seems they missed a step here with rolling it out, or are withholding the obvious. I'm assuming the latter.


This video shows Touch ID being used with a laptop, not Face ID.

The interface is generic, so it should use Face ID on devices that have Face ID hardware, and Touch ID on devices that have Touch ID hardware.


FaceID requires special cameras and an infrared lighting system. Perhaps they'll add that to the MacBook Pro one day, but it would be a crazy leak if they showed off that capability now.


Apple supports WebAuthn on iOS/iPadOS as well (since iOS/iPadOS 13) so presumably FaceID part is referring to iOS/iPadOS.


Anonymous attestation is really interesting because, afaict, there really isn't a way to do this right now if my application wants to authenticate a "normal" user with a "normal" phone.


Isn’t this a classic example of the fragility of biometrics?

If I move to a new device, iOS should be required to give up whatever secret key my face translates to, so I can log into websites.

Simplistically, if iOS silently turned my face into the web password “g0rG0il3r”, when I eventually migrate from iOS to something new, I’ll have to be able take my face password with me, thus exposing that my face was only ever equivalent to a password in the first place?


WebAuthN is not supposed to be the only way you log into a service. The credentials are permanently tied to your Authenticator of choice, which can be lost or stolen at any time.

If you change devices you just provision the new one for your account after signing in with a traditional username/password(/2nd-factor).


FIDO/WebAuthn has been designed to be first factor authentication (passwordless) as well as 2FA.

Though, I agree lost and stolen devices are a problem whose solution space needs more exploring than simply multiple auth devices.


This is, of course, an active thread of discussion in the WebAuthentication working group.


Can you point me to that discussion? I'd love to read more about it.


My company recently started using Okta. This is the first time I've heard of webAuthn. Is there any relationship between Okta and WebAuthn?


Not as far as I'm aware, WebAuthn is a standard, and Okta is an authn provider. You can use WebAuthn with many other things, like the Yubikey.


This has been a very long time coming. Apple finally joined FIDO in Feb of this year, so we knew it was right around the corner.


I think it's pretty ridiculous that Apple pours time and effort into stuff like this but apps have been able to steal from your clipboard for years.

It reminds me of the phenomenon when researchers and engineers don't work on something that's useful for everyday users, instead prioritizing what they find exciting and cool. The security team is so busy dealing with absurd edge cases like nation-states attacking your enclave that they don't seem to care to address egregious and obvious holes in the security model like this.

It's not that WebAuthn and passwordless isn't exciting or useful, it's just that there are much bigger fish to fry (clipboard paste, terrible permissions management, non-shitty VPN support, trackers in apps) that Apple seems completely uninterested in addressing.

There really is no excuse, at least not when you're tooting the privacy/security horn so loudly, but let stuff like this pass by.

Edit: have y'all thought of an actual counterargument instead of just downvoting? Is it really too much to ask security engineers at Apple to focus on actual major privacy holes in their OS than things like this?


i don't want to focus too much on why i think you're being downvoted but i would say it's probably because your message came across as quite reductive.

> engineers don't work on something that's useful for everyday users, instead prioritizing what they find exciting and cool

i get this, to some extent. i really do. but i don't think WebAuthn, sign in with apple, ios 14's recent microphone and camera usage indicators, etc. are not huge steps forward in terms of mobile privacy & security. (rough double negative. you get me.)

i am pretty sure Apple knows about everything you stated, and would wager they are developing, or at least R&D'ing, effective, polished, "Apple" solutions to these problems.

clipboard is a bit of an obscure one, but is absolutely a problem and must be addressed. but Apple is in the business of juggling user experience and privacy. it's quite difficult, because you don't want to get in the way of the user's intents and make things way hard to do. but you also don't want to make things so easy that bad actors can get away with bloody murder.

granted, they still can, if they really want, but it's way harder. and slowly apple is killing the mice. it's a cat and mouse game. i think they're doing exactly what they should be.

could they be doing more? hell yes. they should -always- strive to do more. but right now, this is better than last year, and the year before that, and the year... yknow.


> i don't think WebAuthn, sign in with apple, ios 14's recent microphone and camera usage indicators, etc. are not huge steps forward in terms of mobile privacy & security.

For sure, but they are far less needed than the described issues. You see, the problem is that these holes have existed in iOS for a long time and affect the practical security of the everyday users.

WebAuthn is definitely the future of the web, but why can't we fix the giant holes in the ground before building upon it?

This goes back to the problem of researchers focusing on things that are exciting and will be the future, but not focusing on the needs of the everyday customer. This was the downfall of RCA back in the day.

> Apple is in the business of juggling user experience and privacy. it's quite difficult, because you don't want to get in the way of the user's intents

I understand this struggle. Still, most users are horrified when they learn that everything they've ever copied has been shared with the apps they are using. Even the layperson would gladly give up some trivial auto-paste feature if they knew this.

I would understand Apple's slowness to respond if this was a recent issue, a newly-discovered hole in privacy. But this is a big thing. It's been around for years. And many others have too.

Apple has engineers, they have PMs, and those engineers and PMs are working in the same problem space with these holes. But new features that won't matter for years are being prioritized over major issues that have existed for years, and that's not cool, not when you are yelling that you care about privacy from a mountaintop.


They just pushed clipboard usage detection as a feature in the new iOS... I think this addresses pasteboard concerns in a decent way, no?


Not at all. Detection doesn't stop the compromise once it has occurred. And given that many major apps use this functionality, it's not like users can really do anything about it even if they get that notification.


Does anyone know if iOS devices support multiple users/biometrics per device and could log a specific user in when they are using a shared device? I'm wondering if this could be used for a shared iPad on a factory floor to log users into their own accounts on our web app.


Since we are really talking about WebAuthn and not just the iOS implementation, you probably could use alternative hardware (maybe Android based) that supports the multiple-user biometric use-case. From the server/web-app side it still would just be WebAuthn.

Looks like you use a separate device like this: https://www.ftsafe.com/Products/FIDO/Bio

It supports FIDO2 and can be hooked to any tablet to make it work.


It's not really supported for Face ID - you can create an "alternate appearance" for Face ID but it doesn't work well if the people look very different. With TouchID you can register fingers from different people and it works fairly well, but again not really supported.


What exactly does attestation do vs not having it? Can someone here explain clearly?


With passwords, the "proof" or "attestation" is that nobody knows my secret password. In practice this tends to be weak. Some reasons this is so include:

- the password is low-entropy/complexity, so it is guessable or brute-forcible
- the password is typed into the system, so keyloggers might observe it
- the password might be stored insecurely by the server
- the password is transmitted over the network and might be intercepted
- the password is transmitted over the network, possibly to unintended recipients (like a phishing site), and thus is intercepted

One alternative is employing public-key/private-key crypto. The server sends some random data to the user. The user signs that data with their private key (this is the attestation part). Only the signed data, which is generated for this single authentication attempt is transmitted. And during registration only the public key is transmitted. The public key is used by the service to verify the attested data. The private key remains secret the entire time. Likely it remains secret even to the user's machine because it lives in separate hardware, such as a yubikey or secure-enclave. The key is also something that isn't going to be guessed because we are using modern algos with large bit-sizes.
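To put a number on the "low-entropy" point: even a truly random 8-character alphanumeric password carries under 48 bits of entropy, against the 256-bit keys typical of the curves WebAuthn authenticators use. A rough back-of-envelope, assuming uniformly random character choices:

```python
import math

alphabet = 62          # a-z, A-Z, 0-9
length = 8
password_bits = length * math.log2(alphabet)
print(f"random 8-char password: {password_bits:.1f} bits")  # ~47.6 bits
print("typical WebAuthn key:   256 bits")
```

And real human-chosen passwords are far below even that 47.6-bit ceiling.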


Sure but how is that different from JUST logging in with touchID or faceID? Without attestation?


There is no mechanism for that. The biometric hardware is local to your machine. I suppose the hardware could emit a "biometric fingerprint" that is transmitted to the server. And if anyone acquired your fingerprint they could unlock any site. Ok, so you could add some site specific salt to the process before sending the data. This is pretty close to just having a local password manager where the master key is biometric rather than a password.

But touch/faceID aren't implemented that way. They intentionally don't expose the biometric profile; that is kept local to the secure enclave. They instead just give you access to the secure enclave's keystore. Rather than signing data you could just use uniquely generated public keys like a password, or do something like signing a website's name with a private key to generate a password.

However, these approaches don't really make sense. The advantage of public-key cryptography is that you prove who you are WITHOUT SHARING the private/secret key. This is much more secure, because it prevents threat actors that don't have access to the private key from replicating the "proof" or signing process. This is what attestation is about. You can design alternative attestation schemes, but webauthn is pretty simple.


If you don't have attestation, then I can write some JavaScript code that "return true" for the authentication callback, and log into any and every account.


The attestation is a signed (digital) document saying basically "We are $manufacturer and we made this $product and we promise it has these desirable security properties".

In WebAuthn the design is that a batch of (at least 1000 but usually far more) authenticator products should have such a document which Javascript can optionally request (together with proof they didn't just knock it off from another authenticator) when using the authenticator.

If you demand multi-factor and aren't willing to take my word (as the user) that I'm using it, you could insist upon seeing the attestation and reject authenticators unless you can see the attestation and you like it. For example maybe Great American Bank accepts Yubikeys, but rejects the Apple iPhone because they believe Steve Jobs was Satan.

Most sites should not use attestation at all. Firefox in particular can tell a site to fuck off when it asks for attestation. I'm happy to use high security WebAuthn but I don't want to have to tell you which products I use to do it. If your site does not require WebAuthn for every user then almost by definition it makes no sense to demand attestation from users who choose to enable it.

The use of "batches" is a privacy safeguard. If you permit attestation a site might know you have a Mattel Barbie Authenticator, but it won't know which one. If Mattel aren't selling many they probably put the same batch on the Buzz Lightyear Authenticator so a site can't even tell if you've got a Barbie or Buzz Lightyear.

According to this video apparently (?) Apple thought that wasn't safe enough and so it has decided to do something else weird instead, but not yet. Whatever, for almost all web sites you should refuse attestation if given the option. Maybe my bank needs to know I'm doing MFA with a high quality product but there's no reason Facebook or GMail or anybody like that should ask.


The point of the video was that when using the device as the authenticator, attestation reveals details of the phone (such as the unique private key used to prove the phone is valid to a manufacturer). The anonymous attestation authority here allows Apple to assert the qualities of the device without the device having to reveal identifiers externally.

This is akin to a batch of identifiers the size of all Apple products, while still allowing the device owner (or Apple) to disavow a particular device if it is lost or stolen.

The implementation also ensures that the same device creating multiple identities for the same website will have no signing characteristics linking one account to the other.


It doesn't reveal "the unique private key" - that would be crazy; the revealed key is a public key. And mostly sites should not ask for attestation, and users should refuse to grant it if asked (Firefox asks; you can just say "No", but I'd be comfortable with clients always saying "No" on my behalf instead).

There are already designs if you are quite sure you must have attestation and yet you don't want device identification. You can do blinded attestation and agl has written up a much fancier approach on his blog too.

But again, Don't Ask, Don't Tell. The video shows this silly demo "Shiny picture" site asking for attestation and that's a bad idea you should not replicate, write "none" instead of "direct" and then the problem goes away for your site.


Called it, but it took longer than expected: https://news.ycombinator.com/item?id=18141918

The Safari team do move slowly.


I miss touch id. Face ID is one of my least favorite things apple has done


1Password users can already have effectively the same experience.


Not as securely or cheaply: using 1Password this way either requires less secure TOTP codes (which are easily phished) or a separate token.

Having this available to every Apple user on the web is huge, especially when you look at the network benefits of the Apple feature pushing all of the slackers (hi, every large financial company!) to implement secure MFA.


How are TOTP codes more phishable? Seems like the same phishability to me


If https://fake-bank.example/ persuades you it is your real bank you can just type your TOTP code into it, and now the crooks operating it have a valid TOTP code. Nothing stops you doing this, it relies on you to know it's the wrong site to protect yourself and that's not reliable.

Machinery to take that TOTP code and immediately plug it into the real bank (since it's time sensitive) exists already.

In contrast WebAuthn credentials are tied to the domain name of the site. Your iPhone doesn't have any credentials for https://fake-bank.example/ so it won't sign you in, and even if it did have credentials for fake-bank.example they'd be completely useless on the https://real-bank.example/ web site. There is no way to give real-bank credentials to fake-bank, it just can't work because the cryptographic material used is tied to the domain name.

Google deployed an earlier iteration of this same technology and reported zero phishing for accounts protected this way because it isn't possible to see how to phish it without some grave security bug somewhere. This is the penicillin of web user security, it's a night-and-day difference over what we had before.
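The difference can be made concrete. A TOTP code (RFC 6238, sketched below with only the standard library) is purely a function of a shared secret and the clock - nothing in it names the site, so a relayed code verifies anywhere. A WebAuthn assertion instead signs over the origin, so proof produced for fake-bank.example can never verify against real-bank.example. In the second half, HMAC stands in for the real asymmetric signature purely to illustrate the origin binding:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, t=None, step=30, digits=6):
    # RFC 6238: HMAC-SHA1 over the time-step counter, dynamic truncation.
    counter = int((t if t is not None else time.time()) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # RFC test-vector secret
code = totp(secret, t=59)

# The code carries no notion of *where* it was typed: a phishing site
# can simply relay it to the real site within the time window.
assert code == totp(secret, t=59)  # real site accepts the relayed code

# A WebAuthn-style assertion signs over the origin instead (HMAC used
# here as a stand-in for the real public-key signature):
def assertion(key, origin, challenge):
    return hmac.new(key, origin + challenge, hashlib.sha256).hexdigest()

key, challenge = b"device-key", b"nonce"
phished = assertion(key, b"https://fake-bank.example", challenge)
genuine = assertion(key, b"https://real-bank.example", challenge)
assert phished != genuine  # relayed proof is useless on the real origin
```

In the real protocol the browser, not the page, supplies the origin that gets signed, which is why the phishing site cannot simply lie about it.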


Apple users will get this natively without having to acquire 1Password. If you’ve bought into the Apple ecosystem and don’t have needs outside of it (Windows, Linux), you can eliminate the need for a separate password manager. Similar to how iCloud Files is moving towards (but likely won’t meet, while not needing to) Dropbox parity.

This is making a friendly version of Yubikeys (using Apple devices) and password vaults for Apple users.


For interested readers: 1Password does a few more things. For example, you can add 2FA to 1Password logins, so that 1Password replaces Google Authenticator with the immense advantage that you don’t have to setup 2FA again if you get a new device.

Just a happy 1Password user, not related to them in any way.


I use LastPass for passwords and Authy for 2FA. I like the idea that two different programs have to be attacked to get access to Google, Facebook, etc. There's a little bit more friction than having both in one program but that's the point.


Is it really 2FA if your password and your token are on the same device?


One could argue that authenticating via 1Password is already multi factor in itself e.g. Master Password is Something You Know as the first factor and access to a 1Password Vault is Something You Have as a second factor (since you cannot login to 1Password with just username and password, but also requires a Secret Key that can only be acquired from a device that already logged in).

In this case TOTP acts more like an insecure one-time session key.


Yes. 2FA protects, among other things, against compromised passwords. Having both on the same device does not reduce this protection.


Thanks I didn’t know this.


I'm more of a fan of buy-in without lock-in.


So, one more instance of Apple effectively rendering a third-party app useless. As an Apple user, I love that I don't need to install an additional app, but something doesn't feel right from an ethical standpoint. Or maybe I'm just being too touchy.


You feel that way because it's ethically corrupt behaviour on the part of Apple. They are aggressively pushing out others.


I would rather have a native UX versus Bitwarden, especially if there’s no additional cost (I have to pay for Bitwarden annually or spend time running a server) and an easy way to share creds with my partner (for delegation in the event of my passing).

Competition and a free market have perils. May the best solution win.


You can use Bitwarden for free while using their servers.


Not for family plans, unless something has changed in the last year.

Big fan of these feature being native client side. These are features, not products. We should all be a fan of improved security delivered to as many people as possible, with as little effort on their part.


Unfortunately Face ID has become much less useful since many of us began using face masks. I wish I had touch ID back on my iPhone.


There's surprising support here for the Face ID stuff, even though it's clearly made to normalize using biometric data for everything and is one step towards the surveillance state. But let's pretend we are more concerned with the security of this approach: how is it better than a ring with a chip you'd wear and use for auth? If that identity is compromised, you could just get another ring. And you wouldn't need to give your real identity to Apple.


> How is it better than a ring with a chip you'd wear and use for auth?

Normal people would actually use it. And this is in fact not different. The phone IS the ring. You just unlock it using your biometrics (and of course, the data is NOT given to Apple)


All biometric processing happens locally. You aren't sending your fingerprint or face, they simply unlock the private key in the enclave.


Sure, at this point Apple's solution likely works this way: they care about their reputation. But even a secure Face ID opens Pandora's box: today it's just iPhone users, 5 years later it's everywhere, including in solutions from Huawei, and 10 years later it's the law. Afaik, Delta is already trying to use face recognition. And once face recognition is required to buy gas and groceries, your complete and nuanced dossier will be available for sale on all these data-exchange brokerages. In this beautiful future, face recognition is needed to unlock any car, and it would play you ads for 3 mins while it's starting.


Can a person have more than one account?

Does Google have a similar thing?

Because overnight we can finally stop Sybil attacks! Right?


I wonder how these face detection algorithms work for transgender folks...


Unless they're getting extensive facial surgery, their phones will recognize them.


Have transgender people had issues with FaceID?


Apple is considering this to be multi-factor authentication all in one click: the something you have (the phone) and the something you are (Face ID or Touch ID). From the site's perspective, if you ask for attestation then you will have cryptographic evidence of this. No more SMS 2FA!

Apple is promising to do something extra with their attestation process which they call "Apple Anonymous Attestation" to mitigate the issue where attestation allows tracking the same device across different websites even if they are using different usernames. Not included in the current release, but "coming soon".

The process of enrolling an authenticator is presented as a "one and done" event, but of course the devil is in the details. Shared accounts would need multiple authenticators, and what happens when you upgrade your phone or lose your phone? I guess this gets handled like a password reset. You probably will want users to be able to add/remove authenticators, which means also having to name them.

The enrollment process in this video highlights the upgrade path from a password login to a Face ID/Touch ID login, but doesn't give us the UI flow of a new login. It seems like sites would need to implement a standard registration page and then perhaps swap out the password field with a Next button which would prompt for the Face ID/Touch ID enrollment. Makes me wonder how this competes with or ties into 'Sign in with Apple' if a site wants to offer one-click registration.

Will my Mac and iOS devices automatically and securely sync the private keys between their respective Secure Enclaves so that I can sign in from any of my devices after enrolling on just one?

Lastly, what about the case of returning to a site on an enrolled device when there isn't a cookie present? It looks like the site can detect that the device supports WebAuthn, but I'm not sure if it can automatically detect that the device already has an account enrolled with the site.

The call to 'credentials.get' includes the 'credentialIdBuffer' which is a value provided by the platform authenticator and saved during registration. But in the video they make it sound like 'credentialIdBuffer' is actually optional? It's not even clear to me in the official WebAuthn spec [1] if this value is required, or if the authenticator will just use the RP domain to present the user a list of available credentials? Ultimately I'm wondering if a user without a cookie will still have to type in their username before the site can prompt them for FaceID/TouchID authentication.

[1] - https://w3c.github.io/webauthn/#dom-publickeycredentialreque...


The value is not required, that's why it says "OPTIONAL" in capital letters at the link you gave.

WebAuthn has two scenarios. For something like an iPhone or a Windows laptop we can choose to have the device store a heap of credentials mapped to domain names. So your Safari sees this is cat-videos.example and the iPhone knows credentials for cat-videos.example so .get() fetches those credentials without needing IDs and the site doesn't (if it does things this way) need to ask for a username first. This is called "resident key" because the device remembers which keys it has and you can have your site say during registration that you prefer this outcome, or even that you insist upon it by setting it as "required".

The other scenario is older and very clever, it is used for things like a cheap FIDO USB dongle. It stores nothing, the device doesn't know who you are. This means it cannot provide that seamless "no need to enter your username" UX, however it does work well as a second factor and can be used as a sole factor if you want. How can it work if the device doesn't remember credentials? Well the WebAuthn ID is deliberately big enough to "hide" encrypted keys inside it. Instead of remembering your credentials after registration like this Safari feature, a cheap FIDO dongle encrypts the private key it'll need and then gives that to the web site to store as an ID, alongside the public key. When you give the site your username, it fishes out the IDs associated with your user and hands those to get() in that buffer you spotted. The user's web browser shows these IDs to any FIDO dongles and if one of them made that ID it can decrypt it successfully and authenticate you.
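The two shapes can be sketched as the options objects a site would pass to navigator.credentials.get(). This is only a sketch: the field names follow the WebAuthn spec, but the helper names and placeholder values are made up here.

```javascript
// Resident-key flow: no allowCredentials. The authenticator itself
// remembers which credentials it holds for this RP (domain), so the
// site never has to ask for a username first.
function residentKeyRequest(challenge) {
  return {
    publicKey: {
      challenge, // random bytes freshly generated by the server
      userVerification: "required",
    },
  };
}

// Non-resident flow (e.g. a cheap FIDO dongle): the server looks up
// the credential IDs it stored at registration for this username and
// hands them back; a dongle that minted one of these IDs can decrypt
// the private key "hidden" inside it.
function dongleRequest(challenge, storedCredentialIds) {
  return {
    publicKey: {
      challenge,
      allowCredentials: storedCredentialIds.map((id) => ({
        type: "public-key",
        id, // the big opaque ID the authenticator produced at registration
      })),
    },
  };
}

// Either object would then be passed to navigator.credentials.get(...)
```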

So the overall answer to your question is: It depends.

The cheapest simplest WebAuthn setup uses it only as a second factor and would need a username every time.

A very nice UX focused on being as seamless as possible for Apple users wouldn't need a username unless you're using a device you haven't authorised this way before. The video shows such a UX because that's what Apple wants you to do for their users.


The allow list is optional. If you don't provide one, Safari will show all accounts on the device registered with the RP, or fall back to asking for security keys if there are none.


Happy to see Apple following an established standard for once.


Face or finger data is akin to a login name... What is the password in that case? Because if you can't change your password, that could be a problem.


The biometric data never leaves your device. Your biometric is only used to unlock a private key stored in the secure enclave of your phone. This private key is then used to sign a server-sent challenge. The phone sends this signed challenge back to the server, where the server can validate it using the public key.

The public/private key gets generated by the phone upon first registration with the server.
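A sketch of that registration step, as the options object a site would hand to navigator.credentials.create(). The field names come from the WebAuthn spec; the RP id "example.com" and the helper name are placeholders.

```javascript
// The server sends a challenge; the browser asks the platform
// authenticator to mint a new key pair. Only the public key and a
// credential ID come back - the private key stays in the enclave.
function registrationRequest(challenge, userId, userName) {
  return {
    publicKey: {
      challenge, // server-sent random bytes, signed to prove freshness
      rp: { name: "Example", id: "example.com" }, // the relying party (site)
      user: { id: userId, name: userName, displayName: userName },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256 (COSE)
      authenticatorSelection: {
        authenticatorAttachment: "platform", // ask for Face ID / Touch ID
        userVerification: "required",
      },
    },
  };
}

// const cred = await navigator.credentials.create(registrationRequest(...));
// cred.rawId and the attestation response go to the server for storage.
```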


So if someone has your phone and your face, not only can they unlock your phone but also the services that you are using?


Face/finger data is never revealed to the web sites. You can read webauthn for more information.


Face ID is a terrible idea in an age when Westerners are wearing masks, or in South Asian countries where masks were already common.


The last I read, if you wanted security then Face ID and Touch ID definitely weren't the way to go. I'd rather see Apple pick up something like SQRL[0] than continue down this path of pseudo-security. They work, but it's like having a half-blind doorman who can't tell if you're wearing a mask or if it's your real face.

0: https://www.grc.com/sqrl/sqrl.htm


On Android any way you've set up to unlock the device can be used to also authenticate locally for unlocking the key material (stored inside a TEE): swipe pattern, finger print, PIN code, password, etc. Once that is done the key material is used with WebAuthn protocol which is fairly straightforward public key crypto operations that have been well vetted.

Based on this video we don't know exactly if Apple allows that to be done as well, but it seems within the realm of possibility based on how Apple Pay works (which AFAIK can use a passcode).

Either way discarding the entire protocol because of one implementation is silly. With Apple joining Google, Microsoft and many other companies supporting WebAuthn (a W3C standard) there's a lot of support for it to make logging in easier and more secure. SQRL never got any real traction.


> The last I read, if you wanted security then Face ID and Touch ID definitely weren't the way to go.

Sounds vague and overly general. I don't think anyone can take this seriously without some more information.


You are but a duck-search away

Face ID defeated with glasses and tape https://appleinsider.com/articles/19/08/08/face-id-security-...

Touch ID defeated by lifted fingerprints

2013: https://arstechnica.com/information-technology/2013/09/defea...

2016: https://appleinsider.com/articles/13/09/22/apples-touch-id-a...

2019: https://www.forbes.com/sites/daveywinder/2019/11/02/smartpho...

Biometric "security" on phones is a gimmick.


These demos are useful to help understand the limitations of these security measures, but hardly invalidate them.

E.g. from the first article you linked:

> the attack is only really useful against unconscious victims, requiring both physical access and the tricky move of placing glasses on their face without waking them up.

All authentication mechanisms have limitations, BTW.


Just to state the obvious...

Biometric data must always stay on your personal device in order to be secure from replay attacks, not to mention finding out more about you.

https://amp.theguardian.com/world/2019/sep/04/smile-to-pay-c...

Of course, in Apple’s implementation, the data never leaves the device. Which is far better than, say, how facial recognition is used in China for payment where the merchant is the one operating the machine which scans your face.


The video mentions the data never leaves the device and that the transaction is performed in the secure enclave of the phone.


Giving my finger and face prints to the browser, the software with the biggest attack surface in the world, connected to the internet no less, feels off to me.


All of that is managed by the Secure Enclave in exactly the same way that it is for all other auth on Apple devices. The browser doesn't touch it at all.


Oh the almighty Secure Enclave, bow down to the Enclave...

Do you even know what the heck an enclave is and how it works? It's nuts how, when a figure of authority uses a fancy shiny new word to describe some magic black box, the masses follow with no questions asked.



The concept has been around for years now, it is well understood.



Do you know how it works?


That's the point, not many do



K, but that doesn't mean the person you responded to doesn't know how it works. You are the one that doesn't know how it works, so why are you expecting people to listen to your opinion about the Secure Enclave?


Native apps have long been able to use TouchID/FaceID and have never been able to access the actual finger/face print data. There's no way that iOS exposes your actual finger or face print to websites.


That's not how this works on Apple devices. Basically your fingerprint scan is not sent to the browser, just the authenticated result of the scan (i.e., yes, the fingerprint was so-and-so's).

There have been some past complaints here about how Apple used to tie their fingerprint sensor into the secure element (third-party repair shops couldn't substitute them). I actually liked that rule from a security standpoint, though it's obviously unpopular here. Same approach with Face ID/Touch ID for the web.


They're not giving your fingerprint or face image to the browser. They're simply providing the ability to authenticate using iOS hardware.



This isn't that revolutionary: LastPass already allows you to use biometric ID to authenticate and it works without any changes to the website.


It also doesn't add any security. Your password can still be guessed or phished. When authenticating with a cryptographic token (U2F/WebAuthn), that vector goes away. (Even OTP can be phished... the phishing site can just ask you for the code.)

Password managers do make it more difficult to get phished, since they will not know what password to autofill on phishing.example.com... but on the other hand, password manager users are used to having to force-fill a password. I have to take my password out of LastPass to log into Battle.net or Fusion 360, and websites like the Wall Street Journal create your account on dowjones.com but require you to log in on wsj.com (or maybe it's the opposite, I forget).

With WebAuthn, more care is required for both the site operator and the user (have more than one way of logging in in case you lose your phone, make sure the enrollment and login origins are the same), but you are then open to fewer attacks.


WebAuthn is less phishing resistant than it should be. The original intent was that WebAuthn + token binding would ensure that, even if an attacker obtained a fraudulent certificate for a victim site and had an MITM position on the network, the attacker still couldn’t steal a WebAuthn protected session. Alas, Chrome removed its token binding implementation, and WebAuthn no longer has this property. If you authenticate with WebAuthn, and there is a MITM, the MITM gets your session.

(Conventional phishing is still prevented. If you go to, say, g00gle.com, the owner of g00gle.com can’t reuse your authentication to authenticate to google.com. But this relies on your browser actually knowing what domain it’s looking at, which relies on the CA system.)
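That conventional-phishing protection comes from the signed clientDataJSON, which records the origin the browser actually saw. A minimal server-side sketch of that check (function name and error strings are illustrative, not from any particular library):

```javascript
// The browser signs clientDataJSON along with the assertion, so the
// server can reject assertions minted on g00gle.com: the recorded
// origin won't match. In real deployments the challenge is
// base64url-encoded; here we compare it as an opaque string.
function checkClientData(clientDataJSON, expectedOrigin, expectedChallenge) {
  const data = JSON.parse(clientDataJSON);
  if (data.type !== "webauthn.get") throw new Error("wrong ceremony type");
  if (data.origin !== expectedOrigin) throw new Error("origin mismatch");
  if (data.challenge !== expectedChallenge) throw new Error("stale or wrong challenge");
  return true;
}
```

Note this only establishes what domain the browser believed it was on, which is exactly why the whole scheme still rests on the CA system being honest.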


The WebAuthn specification still explains how you could get token binding if it's present, it's just that it isn't present on any major implementations today.

I'm not sure I believe that real bad guys can successfully attack the Web PKI yet would be foiled by a site using token binding. I think crooks sophisticated enough to burn an exploit to get themselves a fraudulent certificate and put themselves on path for the main attack probably just break into the actual target web server and dispense with everything else entirely. I'd welcome token binding as an option, but I expect I'd never use it in my software.


If you include attackers who control an organizational MITM CA in your threat model, then this could be a big deal. If I were, say, a bank, I would like WebAuthn to protect me against compromise of an organization’s MITM box. Token binding can do this to some extent.


Surely in most MITM box scenarios the token binding just isn't possible?

The only correct MITM box design for TLS is back-to-back client and server, and with that structure there are two TLS channels instead of the one you expected so you can't bind anything to "the" channel between your client and the destination server as there are in fact two channels.

Hacks to try to do something else invariably break and make everything worse. The resulting wreckage for TLS 1.3 took a year of engineering plus an extra year of whining MITM box owners reluctant to stop doing broken crap. We certainly don't want to encourage more of that.


This is true, but it does prevent replay.

So you are vulnerable to an active MITM, but they don't actually acquire any private secrets to be used to create their own sessions. MITM with passwords OTOH, gives the attacker your password.


That depends on how the site works. On most sites, when I authenticate with WebAuthn (or by any other means), I get a bearer token good for several weeks. So a single spoofed WebAuthn session gets the attacker access for quite a while.


AFAIK Microsoft added token binding back in the Edge version based on Chromium. As other commenters pointed out, token binding is still part of the WebAuthn spec.


Security seems roughly equivalent; there is always a fallback method of authentication in case the user changes their device or forgets their password. Even biometrics on an iPhone can fall back to a 4-digit PIN code.

I agree there are advantages to using public key crypto but the reality is that it's more difficult to get right (and therefore not implemented) compared to a simple hashing function for a password.


What happens in the instance that one wants to move away from webauthn?


Right, but Apple has done it, and they have actual market share and 100% control of the device end to end.


No way this biometric data could possibly be abused, right?

I'll stick to taping over my cameras, thanks.


it never leaves your phone.


Yeah, and my personal data never left Equifax's servers either.

You can't change your biometrics when they are inevitably hacked. If you even find out.


For all Apple's faults, they're pretty open about how their Secure Enclave works. I think they consider privacy to be a key differentiator, particularly when compared to Android and Windows. You can see this in how they refused to unlock a phone even given an FBI request.


This personally identifying data is forever one forced update away from abuse. The fact that it's "normal" to take such a risk for such a minor convenience does not mean that it's a good idea.

But I recognize that not everyone is so paranoid. Though in the current political climate, where corporations are clearly choosing sides, you probably should be.


They didn't develop the capability to backdoor phones at the FBI's request.

They are generally happy to hand over iCloud backups, which they did in that case and the FBI "lost" them IIRC.

It was also an iPhone 5c, the last iPhone without a Secure Enclave, I believe they were able to get in with GrayKey.


> They are generally happy to hand over iCloud backups

That one they legally have to do when given a subpoena.


They could encrypt everything and not have the keys. So not really.


The whole point of the backup is that you'll be able to access it even if your device and its keystore are destroyed.


And end-to-end encryption doesn't have to break that: https://security.googleblog.com/2018/10/google-and-android-h...


That's not end-to-end encryption; the key is stored in an HSM in Google's data center in that case. They can be subpoenaed for it.


I don't think you understand what the HSM does. By design, the HSM's secret keys cannot be extracted. Not by the physical possessor of the HSM, nor the manufacturer, nor designer. That is the whole point of using an HSM for this. A subpoena cannot compel the impossible.


Nearly all HSMs that store an arbitrary number of keys can be compelled to dump those keys via a special firmware update from the manufacturer of the HSM, or at the very least to remove checks and allow the HSM to be used as a decryption oracle.

Apple was able to say no, because they weren't in physical possession of the HSM, which meant that they couldn't be subpoenaed for information that wasn't actually in their possession, but a judge wouldn't look as highly on google's case.


The firmware on these chips erases the keys before applying firmware updates. I encourage you to read the detailed information available rather than just making assumptions about it, or even just the short blog post I linked which states this explicitly.

In the San Bernadino case the FBI had physical possession of the HSM, so Apple could have attacked it physically. That's not related to the reasons why the FBI gave up on that case.


> The firmware on these chips erases the keys before applying firmware updates. I encourage you to read the detailed information available rather than just making assumptions about it, or even just the short blog post I linked which states this explicitly.

I read the blog post _and_ the third-party security audit. The audit only documents that rogue actors within Google would leave an attestation trail if they tried to push malicious firmware and would be noticed by Google proper. My concern isn't rogue actors but Google itself. Additionally, the Titan chip in my Pixel has received firmware updates without wiping its storage.

> In the San Bernadino case the FBI had physical possession of the HSM, so Apple could have attacked it physically. That's not related to the reasons why the FBI gave up on that case.

Right, so the legal distinction between "we want a piece of information in your possession that you have decided to lock from yourself" versus "we want your help receiving information that we have in our possession but can't access" is a very very big difference from a warrant perspective.


The audit report does not explicitly state that the keys are erased on firmware update, but malicious firmware updates were specifically in scope for the audit, and this specific attack was not raised as an issue, and the blog post explains why. The Titan chip in your Pixel is not running the mentioned custom firmware that erases the keys on update (and malicious firmware updates to the HSM in the phone were not in scope for the audit).

It is not at all clear that the FBI would have lost if they had continued to pursue Apple in the San Bernadino case. The distinction you are drawing is not as clear cut as you think it is.


When a core piece of their security model isn't backed up by the third-party audit that they literally present as "don't trust us, we have an audit covering this" (and the auditors did look at how Google protects against malicious firmware updates, hence their attestation comments), _and_ when that would leave them unable to update these modules without wiping everyone's backups, _and_ the auditors found security bugs that required a firmware update, _and_ literally the same chips are updated without wiping in a setting with a better argument for wipe-on-update, I'm sorry, I just don't believe the blog post.

Bringing this back to the original point though, just using an HSM doesn't automatically mean that the manufacturer of the HSM can't access the keys.


I quite rightly trust Apple more than I trust Equifax.


Watch the video where he shows the JavaScript APIs being called: the website sees a standard public-key process using WebAuthn. You can use anything which can perform that protocol with zero change to the system — if someone steals your phone, you don't need to change your face since the new phone will generate a new key and the biometric data is only used to unlock that key.



Oh the almighty Secure Enclave, bow down to the Enclave...

I see so many comments pointing to the Secure Enclave in response to any security objection as if it's a panacea. Do you even know what the heck an enclave is and how it works? It's nuts how, when a figure of authority uses a fancy shiny new word to describe some magic black box, the masses follow with no questions asked.


You may not know what it is, but it's been well documented for years: what it is, what it's used for, and its built-in tamper-resistance features:

https://support.apple.com/guide/security/secure-enclave-over...

https://www.apple.com/lae/business/docs/site/iOS_Security_Gu...


> Do you even know what the heck an enclave is and how does it work?

It's an isolated smart-card-like KMS core within the SoC that has most smart-card guarantees (e.g. resistance to side-channel attacks; tamper-resistance).


They've been fairly open about how it works, and since then the binaries have been disassembled and backed up what they were saying.

It's a pretty heavily modified L4 kernel (I want to say L4::Pistachio off the top of my head) that for some reason has had Mach-O support added, and it pretty much just acts as a keystore with a secure but upgradable boot sequence.


Please don't spam the same comment


I literally mentioned it one other time as a reply to a comment; that's hardly spam (the m stands for mass, FYI).




