This is based on the open standards WebAuthn and FIDO2, where the credentials (“passkeys”) are synced via iCloud Keychain. Currently you need to remember to register at least 2 security keys, in case one is lost or misplaced. The syncing of passkeys in iCloud solves this backup problem.

https://fidoalliance.org/apple-google-and-microsoft-commit-t...




>Currently you need remember to register at least 2 security keys, in case one is lost/misplaced.

This is always my issue with 2FA or passwordless auth. You're forced to have 2 devices and are kind of screwed if you don't have two on you.

I was on a trip and broke my iPhone. It had my plane tickets on it to get home. I was able to get a replacement from Apple, they just gave it to me and sent me on my way. When I turned it on it wanted me to authenticate with one of my other Apple devices. By dumb luck I happened to have my iPad with me. If I didn't have that, I'm not sure what I would have done.

A co-worker told me to move all my 2FA to Authy as a means to avoid locking 2FA to hardware, but I haven't sufficiently looked into it yet.

While I don't like passwords and understand their very real security limitations, I'm also not a fan of my phone becoming my identity.


> as a means to avoid locking 2FA to hardware

Tying 2FA to hardware is, for most common use cases, a bad idea. Instead, always use TOTP and keep the seed in secure storage with multiple backups.

If on top of that you like to keep it on your phone and generate the code that way, fine. But at that point you can destroy the phone and it doesn't matter; you'll still have access.
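For example, a TOTP code is just an HMAC of the current 30-second time step, so any copy of the seed regenerates the same codes on any device. A minimal RFC 6238 sketch in Python (standard library only; the base32 seed is a placeholder):

    import base64, hmac, struct, time

    def totp(b32_seed: str, digits: int = 6, step: int = 30) -> str:
        key = base64.b32decode(b32_seed.upper())
        counter = struct.pack(">Q", int(time.time()) // step)   # RFC 6238 time counter
        mac = hmac.new(key, counter, "sha1").digest()
        offset = mac[-1] & 0x0F                                  # RFC 4226 dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # any backup of the seed yields the same codes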

> While I don't like passwords and understand their very real security limitations.

When used correctly, passwords are a fairly great solution with fewer limitations than competing solutions. By correctly I mean: generate yourself 128 bits from /dev/random, never reuse them, and store them securely.
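For what it's worth, something along these lines does the job (Python's secrets module reads from the OS CSPRNG, the same entropy pool behind /dev/random on modern Linux):

    import secrets

    # 16 random bytes = 128 bits of entropy, URL-safe encoded for typing/storage
    password = secrets.token_urlsafe(16)
    print(password)  # store it in a password manager; never reuse it across sites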


> Tying 2FA to hardware is for most of the common use cases a bad idea.

For me, I don't consider that to be true.

I have a Yubikey on my keyring, and a backup Yubikey in my safe.

Losing my keys is an extremely rare thing (I've never actually lost my keys; the closest I've come in the last 30 years is temporarily misplacing them or locking them inside).

I'm happy enough to deal with losing my digital access (via 2FA) temporarily under the same sort of circumstances where I've lost my keys. I might need to call a locksmith to get inside my house/car if I've locked them inside, or possibly to get me inside so I can replace the locks (and get my backup Yubikey out of the safe).

When I travel for work, I at least try to make sure I can get into critical systems using TOTP (on my phone and backed up with cloud-accessible seeds), to protect against losing the Yubikey while abroad. I don't usually bother doing too much of that when I'm on vacation travelling.


> I'm happy enough to deal with losing my digital access (via 2FA) temporarily under the same sort of circumstances where I've lost my keys.

To me the critical difference would be that my house keys are single-purpose and only serve a single location. I've lost/broken keys a few times in my life, and the only issue was waiting outside the house for a few hours.

I didn't need to authorize 3-D Secure transactions when paying for the hotel or a taxi, didn't need to authorize access to my GitLab account at work, nor validate that I'm really me in the flurry of 2FA services. Nowadays phones and computers are more akin to wallets, and I'm actually in more trouble when losing access to my phone than when losing my wallet.


Absolutely this. I can lose my wallet - I know how much that's realistically going to cost me and I'm protected against fraud and theft regardless, save for the headache of making a few phone calls.

The world really hasn't appropriately quantified our reliance on little black fondleslabs.


To echo the sibling commenter, perhaps this rule applies:

When dealing with hardware, hardware access tokens should be required. When dealing with software, software access tokens should be required.

That way, you never have hardware compromised by remote tokens, and you never lose access to your software because you lost some hardware.

E.g., hardware tokens to login to a laptop, but only a software token (password) to get on a flight.

Of course there are a lot of use cases in between with varying needs (escrow boxes with digital locks, intelligence services who need to verify the identity of their agents when entering/exiting premises), but I posit those come with requirements outside of the "ordinary".


You should have a third in a safety deposit box at a bank or other offsite location.

The issue is if you have a fire and both your keys are melted, you're f'ed.


But the entire issue is that needing to enroll all your keys every time you gain access to a new service is directly at odds with keeping one copy/key in a second, safe location.

If there were one more layer of abstraction where all N of my keys prove that I'm "me" (or proves that I'm some entity) and the "me"-ness is the principal that gains access, that would be nice, but that's directly at odds with not wanting to rely on some third party identity/authentication provider.

Authentication is hard.


That's an issue of the implementation, not of the concept of hardware 2FA. With SSH keys, you don't need them around to enroll them, as they use public key cryptography. I can point someone else to github.com/est31.keys and they can give me SSH access. The actual SSH keys can reside on hardware.

For some reason, this use case was not considered for Webauthn.
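To illustrate, here's a rough sketch of that enrollment flow in Python: the public halves come from GitHub (which serves each user's registered keys as plain text at github.com/<user>.keys), while the private halves never have to leave the user's hardware:

    import urllib.request

    def public_keys_for(user: str) -> str:
        # GitHub serves each user's registered SSH public keys as plain text
        url = f"https://github.com/{user}.keys"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()

    # Append these lines to ~/.ssh/authorized_keys to grant access;
    # the corresponding private keys can sit on a hardware token throughout.
    print(public_keys_for("est31"))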


Question for you about this as I've often considered it.

What is the life of a yubikey? Do they degrade over time horizons?

The reason I ask is that if you have a backup that you never hope to use it's likely to be accessed only very rarely - which makes me kind of wonder what if your primary yubikey fails in 15 years due to natural wear/tear/degradation due to the passage of time and your backup has succumbed to the same problem due to being just as old?


> What is the life of a yubikey? Do they degrade over time horizons?

I don't think it's an issue in practice, certainly not for someone using them as they were intended, even heavily, but in theory a JavaCard implementation (like most of the smart card ecosystem, Yubikeys are still JC devices as far as I know) could "wear out" from use because of the way they work internally[1].

I've never personally seen that happen, and all of my Yubikeys still work, even the ones I bought over 10 years ago which were used far more heavily (20-30 ssh/gpg/piv operations per hour, every day, for years) than most people would use a FIDO key.

I've only managed to break other manufacturers' smart cards by severely misusing them (as a USB-connected Linux HWRNG; I doubt the RNG command was designed to be called every few seconds for years).

[1] The JavaCard standard requires certain (all? I can't remember, it's been a while) objects in applet code to be written to persistent storage (meaning flash/EEPROM), which has endurance limits. In practice they're not expected to be treated as permanent storage devices; if a card fails it's supposed to be replaced with another: revoke the old key pairs, register the new ones, etc.


It's solid-state electronics; if not subject to any external factor (which shouldn't be common to both keys), it will just keep working on any timescale that matters here.

I've carried one in my pocket for ~10 years without a problem, now I want to replace it because it's too old to support ed25519. That's likely a fraction of its useful hardware life.


> What is the life of a yubikey? Do they degrade over time horizons?

Not that I could see over about 4 years. I've been using one YubiKey (USB-A) for over four years now and a second one (USB-C) for over two. Each has been carried with my keys for at least two years.

But the right approach anyway is to use this excellent tutorial: https://github.com/drduh/YubiKey-Guide and generate your keys yourself, storing a backup in a secure location. This is what I do — so even if my keys get completely destroyed, it will be possible to recreate them from backup.


Any OTP can be phished. There is no automatic verification that the OTP is being given to the correct party.

A public/private key alone would have a similar issue, but for FIDO keys the browser supplies the domain it's actually talking to. The domain is authenticated with TLS, or the browser on an uncompromised machine won't send that domain over. The device only signs the challenge with the private key generated for that specific domain.
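Here's a conceptual sketch of that per-domain binding, using Ed25519 from the Python cryptography package; the real CTAP2/WebAuthn flow adds authenticator data, counters, and attestation, but the core idea is the same:

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    keys_by_rp = {}  # authenticator state: one keypair per relying party (domain)

    def register(rp_id: str):
        keys_by_rp[rp_id] = Ed25519PrivateKey.generate()
        return keys_by_rp[rp_id].public_key()  # the server stores only this

    def sign_challenge(rp_id_seen_by_browser: str, challenge: bytes) -> bytes:
        # The browser, not the (possibly phishing) page, supplies the domain,
        # so a lookalike domain never reaches the real site's key.
        return keys_by_rp[rp_id_seen_by_browser].sign(challenge)

    pub = register("example.com")
    challenge = os.urandom(32)
    pub.verify(sign_challenge("example.com", challenge), challenge)  # passes
    # sign_challenge("examp1e.com", challenge) -> KeyError: no key for that domain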


While you're technically correct, any authentication system worth its salt would ideally see the same user trying to authenticate from two different locations and prompt the second user for another factor of authentication (email, etc.). And since TOTP expires, it's not like they could sit on the token and use it later.


The OTP is already a second form of authentication. An email link would be a third form. I never saw that when I used Authenticator.

Anyway, the user would likely still click the link in the email since they are trying to log in.


TOTP is prone to phishing. Not to mention that you still need a password, so it's both insecure and hard to use. You could argue that "when used correctly, passwords are a fairly great solution", but as passwords are flawed they are exploited, and even the "experts" happen to fall victim to "improper" use of passwords.


Only if by "never reuse" you mean "never ever log in after the initial login". The problem that WebAuthn/FIDO solves is that even if you read my encrypted communication, you won't be able to use it to gain access to my identity.


A software implementation of WebAuthn requires a TPM module, and to be honest I think privacy and user identification are more of a security problem on the web than being phished for passwords. The problem I see with FIDO2 is that it considers far too narrow a corridor of threats.

Sure, for devices that need to authenticate themselves it is a decent or maybe the best solution. For me as a user? I am not convinced. It cannot compete with passwords.


If you've managed to insert your malicious code in a place where you can bypass TLS, secrecy of the password isn't my main concern anymore, as all is lost. It's not a threat model I worry about in most circumstances (sure there's always exceptions).


> always use TOTP

TOTP can be phished or man-in-the-middle'd and isn't as secure.


Fair, but I am a nobody that is unlikely to be specifically targeted. I am willing to swing the balance towards convenience/backup safety vs utmost security.


A common misconception. After credential stuffing (which a 2nd factor protects you from), the biggest threats (for people with a 2nd factor) are phishing and keystroke loggers, neither of which requires a targeted attack.

OTP is way less convenient than FIDO keys, so it's a win for both convenience and security. The only downsides are the cost, and the effort required to register multiple keys, which is easily compensated for by the ease of use during authentication compared to OTP.


But then I'm not aware of a FIDO key that works with random apps on an iPhone. That's where I, personally, have by far the most logins and spend money. Pretty much all of them support some sort of authenticator app nowadays though.


Apple’s implementation uses SMS as a backup. The thinking is probably that if you only have one device, it’s usually your phone, so you would have been able to get your 2FA code via text. It’s not easily discoverable though, so it's easy to miss.


You can use SMS as a backup 2FA to login to your online Apple ID account, but that's not enough to access the iCloud keychain.

The decryption keys for that data are only stored on your iDevices. It's E2EE after all. So while you can access your Apple account via the SMS 2FA backup, you won't be able to access your actual iCloud Keychain data/passkeys without some sort of access to your iDevices. (It might be sufficient if they're online somewhere and you have their login credentials?)

A bit confusing, but if it really is E2EE, then you can see why SMS alone wouldn't be enough to recover your Passkeys.


There is a procedure for recovering access to the E2EE data in the event that you no longer have access to any of your Apple devices.

https://support.apple.com/guide/security/secure-icloud-keych...


> Apple’s implementation uses SMS as a backup.

I hope they'll move away from this, or at least give the option. I won't use their password/key storage until they do. 2FA is only as good as the weakest link, and SMS is the weakest possibility.


I don't think they can get rid of it, as not everyone using Apple's services has a supported Apple device.

They don't offer a standard like TOTP, so SMS is the only option.


Is it possible to disable SMS at the carrier level?


2FA is as strong as the strongest link, not the weakest. You need both factors, not either factor.

In this case, it's just that one of the factors has a weak backup option.


Until the "try another way" option is a weaker form of 2fa, like sms.


So if I have a single device, a phone, and it gets stolen... what is the path to get my data back? And in the interim, if the thief swaps my SIM into another phone, they now have my 2FA via SMS?

This all seems very messy when bad things happen.


They solved this with a feature called “Recovery Contacts” in iOS 15(?). You can set them up and they’re people who cannot access your account but can help you regain access if necessary (such as your one device case).

I think you still need to know your password, but that’s pretty reasonable.

They also added a similar feature to allow you to get into a loved one’s account/phone after their death if they set it up.


>They solved this with a feature called “Recovery Contacts” in iOS 15(?).

That doesn't solve it for me. None of my trusted contacts has an up-to-date Apple device.


I think the answer to the "stolen SIM" from Apple may just be "use e-SIM".

I agree the inability to remove SIM as a backup 2FA method is troubling. I would sign any liability waiver in blood to be able to remove SIM as a backup auth method.


SIM cards can have PIN codes.


Get an e-SIM.


A password can very well be as secure as the ownership of a device. Compared to most 2FA schemes, I love them because they are simple. I think if people are trained adequately it isn't an insurmountable barrier. But the industry never did develop good practices, and bad ones are still around.

I don't like to have my key chain in the cloud at all. Loss or lack of access is far more likely this way. I already hate that services profile my device or location.


> I think if people are trained adequately

When will this happen? How will it happen?

Websites/services just make this way too difficult. Banks will host official services (that require login) on domains like www2.citionline.com with no way to know whether it's legit or not.

Apple has a marketing site at offers.appletvapp.apple which leads to prompts to sign up - how is any normal person supposed to understand this is legit? That domain is virtually indistinguishable from some phishing site at apple-iphone-offers.online


Password managers like Bitwarden can do TOTP with syncing. Doesn’t help with Apple though which uses non-standard 2FA. I actually have some accounts using SMS still, because I can fairly easily get a replacement sim if the device is lost.


You don’t need to have your second device to get past 2FA in a disaster scenario. You can phone up Apple Support and as long as you know the code you use to unlock your old phone (which is a reasonable ask), you can regain access to your iCloud account, and hence to your iCloud Keychain/devices.

Apple don’t know your unlock passcode, so they can’t do it for you, but this is designed to cover the “my house burnt down with all my devices inside” type of scenario, or your own “I no longer have access to a device”.


> When I turned it on it wanted me to authenticate with one of my other Apple devices. By dumb luck I happened to have my iPad with me. If I didn't have that, I'm not sure what I would have done.

As per the Apple FAQ[1]:

"you can get a code sent to your trusted phone number via text message or an automated phone call instead. Click Didn't Get a Code on the sign-in screen and choose to send a code to your trusted phone number. "

[1] https://support.apple.com/en-gb/HT204915


Oh wow. I thought those keys were the foundation for real end-to-end encryption, i.e. Apple doesn't have access to them. Does this mean their "E2EE" is basically fake?


> Oh wow. I thought those keys were the foundation for real end-to-end encryption, i.e. Apple doesn't have access to them. Does this mean their "E2EE" is basically fake?

I'm afraid I don't follow.

I don't know what you're talking about, but I thought I was talking about the manner in which Apple provides an alternative to Apple hardware 2FA.

i.e. "normally / if available", Apple will do 2FA on your account by virtue of you being already logged in on another device. HOWEVER if that device does not exist (or you only own one Apple device), then as per my FAQ link, Apple DO provide an alternative mechanism that DOES NOT rely on the existence of a secondary Apple device.

This methodology is no different to any other 2FA alternative mechanism (e.g. "backup keys" or other websites/services that also use phone/SMS as backup, e.g. Microsoft Authenticator).

Thus I believe I was correctly answering the OP's question AND I don't see any problem with the way Apple does it, because in practical terms it's no different to anyone else in terms of "backup" for 2FA.

Thus I've no idea what you're claiming to be "fake", and I'm not sure if I want to be drawn into that discussion because it sure sounds like Apple bashing that is not factually supported.


> A co-worker told me to move all my 2FA to Authy as a means to avoid locking 2FA to hardware, but I haven't sufficently looked into it yet.

I've heard this approach of cloud-based second-factor auth keys called "1.5FA". It's probably enough for most people, most of the time.

That said, in the case of a broken phone and away from your computer, you'd still need either a second device that's already logged into your vault or a backup recovery code. That's a good thing for all of us to keep in mind the next time we're traveling.


>"I'm also not a fan of my phone becoming my identity."

I see that many people are slowly moving in this direction and just can't fathom why they fall for this corporate trap.


Convenience. It's the reason for most mass-market uptake.


They don't know any better.


Your options are here, btw: another Apple device, a text to your phone number, or account recovery, which is automated but takes a few days: https://support.apple.com/en-us/HT204915


>You're forced to have 2 devices

Do you consider a U2F key a device?

And it's not exactly 2. It's n+1, where n is the number of 2FA physical factors you expect to lose/break. If you expect to lose/break 0, then you need 1. If you expect to lose/break 5, then you need 6.


It's strange and rather unfortunate to see this constant reinvention of authentication methods. Asymmetric cryptography as used in things like SSH keys and TLS client authentication has been around for decades, is very much standard, and the only changes have been stronger algorithms and longer keys. Smartcards as hardware secure elements have also been around for a long time. I'm not sure how much of a conspiracy theory it is to say that things like this are merely attempts by Big Tech to strong-arm everyone into their own idea of "standards" and run away from all the smaller players in the industry, but I'm sure that we had everything necessary for "passwordless authentication" two decades ago, or at least methods in which it's not necessary to send a password to the authenticating server nor store it there.


It's absolutely not a conspiracy theory, but it's a bit more complex than a single coordinated push. There was a big push a while back from the likes of Microsoft to e.g. eradicate SSH credentials in favor of stuff like AD (ugh, why?), specifically with respect to git clients. I know, GitHub still takes SSH (they'd break too many people otherwise), but places started moving towards AD, or "password manager integration" clients.

Part of that is on the "security contractors", who are objectively snake-oil salesmen (when you make a living selling people publicly, freely available, publicly supported software, and charging 6 figures for it, that is the definition of a swindler), especially since they started propagating their whole "security regimen" as a set of tasteless, mostly useless "security awareness" trainings. They harped a lot on choosing good passwords and caused a lot of bad password security practices on almost every website (I still see this everywhere online: "please use 10 characters with one symbol from (!$./ ... etc) and 1 number". No: use entropic password measurement, and maybe don't assume your site is important enough to warrant a high-entropy password.)
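For the curious, the arithmetic behind entropic measurement is simple: a truly random password carries length × log2(pool size) bits. A quick sketch in Python:

    import math

    def entropy_bits(length: int, pool_size: int) -> float:
        # bits of entropy for a password drawn uniformly from pool_size characters
        return length * math.log2(pool_size)

    print(entropy_bits(10, 72))  # the "10 chars incl. symbols" rule: ~61.7 bits
    print(entropy_bits(22, 64))  # 22 chars from a 64-char alphabet: 132 bits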

So, once we were all left with an unsustainable bag of crappy passwords for every buytoothpaste.com website out there... well, we all had to try to invent something else. There was SSO/OAuth, which failed because it was overcomplex (or got rolled into a banal corporate policy system which was horridly complex to deploy, and the security contractors got paid to audit the bad systems).

Then pile on the other heap of bad password-strengthening abstractions (2FA), etc., and you get to today. We never had SSH for the browser, GPG/PGP remained meh, so the result is a constant stream of "new solutions" to a problem which could have been solved by a) not caring as much about passwords and communicating risk to the users instead, or b) fixing SSL/SSH.

And why did nobody do a) or b)? Again, I blame "security contractors" for a), and for b), people not being paid to do it.

Yeah, profit-seekers will always try to capitalize on chaos, that's hardly conspiracy, that's just business.


Perhaps not reinvention but rather repackaging. Web Authentication (on which this is based) is (just) asymmetric encryption in an authentication challenge/response protocol.

It is at an API level, rather than the transport level like SSH and TLS, because applications often have more complex requirements than these provide. In particular, SSH and mutual TLS typically expect traffic to be authenticated at the transport level on use, and for the credential to exist and be evaluated at first interaction. Websites typically have registration and self-service management functions, as well as anonymous access.

There is also nothing especially new about the use of hardware secure elements, nor was anything new claimed.

I will say as someone who implemented website smartcard-based authentication a decade ago - the experience was typically very poor, because the software stack had not been built for that use case, and often relied on third-party components which were simply sub-par.

There's a lot to be said for reusing technology, but there's also a lot to be said for creating the best possible experience. The MTLS experience that has existed has not gotten any notable consumer adoption for very valid reasons.


Well, there are encryption algorithms, and then there are authentication methods... don't confuse the two. There have actually been a lot of interesting developments in authentication methods, not the least of which are the FIDO2/WebAuthn standards. While you might perceive them as plays by Big Tech, smaller companies like Yubico are kind of at the core of it, and without WebAuthn in particular it was rather hard to have confidence in browser-based authentication. Yes, there were certificates, but the general public has struggled to understand and adopt device certificates in a way that doesn't lead to them being stolen.


There was already a passwordless authentication mechanism in browsers called SSL client certificates. Approximately nothing uses it because it’s hard to use.


My understanding is that this is basically the same asymmetric public/private key cryptography.

WebAuthn is the standard defining how this works on the web.

FIDO is the alliance (+ standards?) for multi-platform interoperability: how do I get my private keys from my iPhone into Chrome on my PC?


> The syncing of passkeys in iCloud solves this backup problem.

But then Apple has your keys...


I would assume that they are at least encrypted locally before being uploaded to iCloud. (But yes, Apple could always change things)


Don’t be so certain - we need more details from Apple on this. Last I checked iMessage was still (!) not encrypted when backed up to iCloud.

https://www.howtogeek.com/710509/apples-imessage-is-secure.....


iMessage backups are encrypted, they are just not encrypted as much as some people would like.

In particular, Apple has HSM servers outside their hosting environment for auditable release of encrypted backups. This could be done for a support request for a lost user password or as part of a legal demand (say, family of the deceased seeking access to photo history, or requested by law enforcement with a court order).

The passkeys system uses iCloud Keychain, which is a separate mechanism and is encrypted before being sent to Apple using user-device-private keys. You should need to both get iCloud access _and_ provision a device into the "ring" before you can access passwords or passkeys.


iCloud Keychain is end-to-end encrypted; Messages isn't, because Apple took the tradeoff of letting people keep their iMessage history even after a support-initiated account reset, which otherwise wipes your entire iCloud Keychain.


Messages is end-to-end encrypted. The key is stored in iCloud backups if they’re enabled (and if I recall correctly the messages on your device are backed up as part of an iCloud backup as well), but you can turn those backups off.

> [1]For Messages in iCloud, if you have iCloud Backup turned on, your backup includes a copy of the key protecting your messages. This ensures you can recover your messages if you lose access to your Keychain and your trusted devices. When you turn off iCloud Backup, a new key is generated on your device to protect future messages and isn't stored by Apple.

> If you forget your password or device passcode, iCloud Data Recovery Service can help you decrypt your data so you can regain access to your photos, notes, documents, device backups, and more. Data types that are protected by end-to-end encryption—such as your Keychain, Messages, Screen Time, and Health data—are not accessible via iCloud Data Recovery Service. Your device passcodes, which only you know, are required to decrypt and access them. Only you can access this information, and only on devices where you're signed in to iCloud.

[1] https://support.apple.com/en-us/HT202303


Passwords in iCloud Keychain are already E2EE, it seems reasonable the private passkeys would be too.


> iCloud ... backup

> E2EE

If you can lose all your existing devices and still restore your data, then that data isn't end-to-end encrypted.

I'm taking the "end" in E2EE to mean your devices. Nothing but your devices can decrypt your E2EE-protected data. If a new device can enter the circle of trust without an existing device's cooperation, then there is a backdoor.

I imagine iCloud Keychain supports synchronization rather than backup.


The passwords stored in your backup via iCloud Keychain use the passcode of your devices as a secondary encryption/lock method, which doesn’t have a password recovery mechanism like the Apple ID used to secure your iCloud backup. Not sure that meets the definition of E2EE, but it’s not like the passwords are recoverable by another party (or even you, if you forget the passcode) just because they’re in your iCloud backup.


So maybe I don't get it, but I always understood that 2FA means something you know and something physical you have. Now if I can get the keychain using something I know, does that not somewhat defeat the purpose of 2FA?


In general it's "who you are" (biometrics) as well as "what you have", with the OS being the one ensuring that the phone itself was unlocked, plus an extra biometric check when signing in with passkeys; this is how iOS currently works: it pops up Face ID before it signs any WebAuthn challenges.

Also, ideally, your syncing passkey solution (whether that be 1Password or iCloud Keychain) would itself require a combination of multiple factors before you can get in. In the case of iCloud Keychain, 2FA is on by default on your Apple account, and the keychain is also protected by your password plus the passcode of one of your devices. In general this is already immensely more secure than passwords, because the website is verifying a signature instead of the correctness of a shared secret. So it'd still be possible to have 2FA with the first factor being a passkey and the second factor perhaps being another physical security key or verification of an email code, but that would likely be reserved for enterprises and high-security applications.

(I assume Apple themselves aren't going passwordless anytime soon, especially with how that'd work on fresh devices.)


Typically MFA is something you have (physical possession), along with something you know (secret) or something you are (biometric).

This is more abstract than physical possession of a single device with a non-exfiltratable private key. There are synchronization processes (so it's one of many physical devices, on a sync fabric which allows devices to be added).

The process for adding a device should require multiple factors as well, but I believe there is ultimately a recovery mechanism, typically something like a printed recovery key, which would make this considered single-factor.

However, most deployed 2FA is via SMS, email, or backed-up TOTP today. The goal is to build a much more secure system that is recoverable enough to get consumer adoption, not to try to achieve, say, NIST 800-63 AAL3.

One ongoing proposal is that you get an additional device-bound factor as well. Seeing a new device-bound factor would let you decide to do additional user verification checks if desired.


Maybe using the user account password would allow for E2EE without any device?


How could one verify that? Like for a compliance audit?


https://support.apple.com/guide/sccc/introduction-sccccea618...

Introduction to Apple security assurance

As part of our commitment to security, Apple regularly engages with third-party organizations to certify and attest to the security of Apple’s hardware, software, and services. These internationally recognized organizations provide Apple with certifications that align with each major operating system release. …


Are such third parties listed? Can you inspect their reports? What testing methodologies are involved in order to issue such certifications? And can we see such certifications at all?


If you don't trust Apple, why would you trust a third party auditor?

I can't think of any entity I would trust with securing truly sensitive information. For important stuff, do it yourself. For simple things, including bank accounts and such, I see no issue with trusting Apple.


Because you’re trusting both Apple and the third party jointly, each of whom has different incentives.

I don’t know that I buy the “for truly sensitive stuff, do it yourself” line. That’s like saying you should handle truly lethal substances yourself. Most people aren’t more skilled than the Apple security folks. You’re almost certainly going to screw up your encryption or leave some vulnerability unpatched or unknown. Frankly, I consider my iOS devices to be some of the most secure systems I have access to, and reading through their security documentation has informed that opinion.


> Because you’re trusting both apple and the third party jointly, each of whom have different incentives.

The cynical view, of course, is that Apple's incentive and the Third Party's incentive can become very much aligned for the right amount of money.


You also have to consider the market value of their reputations jointly as well. It would have to be a huge incentive to risk their reputations: both Apple's, with their security-conscious customers and customers with high regulatory burdens, and the auditor's, whose only asset of value is their reputation. Auditors typically poof out of existence (Arthur Andersen, anyone?).


Trust requires transparency, and a published security audit report created by a reputable independent author would definitely increase my trust in Apple, because it shows they don't have anything to hide.


Yes, particularly if you have a need to. But a lot of the details you mention are findable in that link.


iCloud Backups != iCloud syncing, IIUC. Passwords/Wi-Fi/etc. syncing is a different E2EE system.

You can disable iCloud Backups and still use iCloud


ahhh so they already have what they need to do iCloud E2EE, they just decide not to use it for your data....


Yup. Probably because law enforcement would be livid if Apple did that. In the San Bernardino terrorist case, Apple basically said triggering an iCloud backup is the best way to get the contents of a locked iPhone. Apple routinely supplies law enforcement with the contents of iCloud backups.


It remains one of the clearest examples of law enforcement wanting a friction-free solution for getting data out of iOS without Apple. They could have easily obtained the information, and had been doing so for years. They were explicitly trying to generate sympathy for a backdoor solution.

What's sort of surprising to me is how much they overestimated public support for their cause.


Perhaps the clearest way to see what is available is to look at https://www.apple.com/legal/privacy/law-enforcement-guidelin... to see what information is available.

There are plenty of non-LE use cases, such as people who need to recover access after a lost password, as well as families who want access to a deceased family member's information after the fact.

Apple has been (slowly) adding support for other recovery systems and for legacy contacts as first-class features. The UX for this currently lists Apple as a fixed option among a list of other options (such as personal contacts).

I expect long-term that Apple will have access to backup recovery for a number of people as a system default, but not for everyone.


E2EE would have made it significantly harder for Apple to build the web-based apps at iCloud.com. Not to say they shouldn’t have, though, but I can understand why they didn’t.


Does anyone use that? It's nice to have when I want to access my data from the web, which is never, and it's not worth the loss of security.

But I imagine the FBI wouldn't like an end-to-end encrypted iCloud Photos at all.


I think the main thing is Find My access, but Apple seems to claim it's E2EE despite being available at icloud.com/find, so perhaps it wasn't too complicated; I imagine it stores the plaintext password in memory to access the data.


Matrix has E2EE on the web... it's kind of different, but you can share pretty large files...


Just imagine the outcry if you forgot your password and lost access to all of your pictures or other data.

If I lost access to my passwords (E2E encrypted), it would be an inconvenience.



It’s end to end encrypted. Apple doesn’t have access.


Same as they don't have access to your iMessage messages... unless you happen to use iCloud, which they purposefully make really inconvenient to not use.


Yes, Apple is willingly staying quiet about your iMessage backup being accessible to them. But that does not change the fact that iCloud Keychain is end-to-end encrypted, with Apple incapable of accessing your keychain.


When I first learned about the iMessage backups being accessible, I was a bit let down by Apple. But I never believed they were best in class for privacy. The iPhone's true strengths lie in the OS support and Face ID.

iMessage itself is pretty slick, but if I want privacy I use Signal. It also gives me a cross-platform messenger, as I prefer Windows and iOS.

Same as with Microsoft Edge. It's my favorite browser now, but I do give up a bit of privacy for convenience. If I want the best privacy I use Tor Browser, which I always keep installed, along with Signal.


I would expect passkeys to be a massive liability for Apple in case they get breached. Why would they even want access to them? Do you think they want access to your accounts?


I think they want to provide the "I forgot my password and lost all my devices" convenience. People hate losing their data, and losing access to a lot of passwordless services would be a nightmare. In general I'm super wary of anybody who promises to have no access to my account and then offers any reset-password functionality.


No, it's not. Apple has the key for iMessage backups. It does NOT have it for Keychain.


iCloud Keychain is end-to-end encrypted.


How will this work on Linux?


FIDO USB devices just use the HID protocol, so they work fine on Linux. Chrome and Firefox both support them.

I wrote a FIDO implementation that protects the signing key using the system's TPM specifically for linux: https://github.com/psanford/tpm-fido

There is no reason why you couldn't implement a similar syncing strategy in a tool like this if you wanted to.


> just use the HID protocol

This is literally true, and covers what was important in context, but warrants a little extra explanation. Since these devices are specifically for humans to interface with (they typically have a button or contact sensor, though some have keypads or a fingerprint reader), they are logically Human Interface Device class USB devices, but they do not speak the HID keyboard or pointing-device sub-protocols like your mouse or keyboard (or the built-in "take a photo" button on your webcam). Instead they provide a FIDO-specific HID sub-protocol, which is publicly documented; instead of operations like "Caps Lock pressed" it's got stuff like "Begin enrolment" or "PIN xxxx entered by the user", which only make sense for this specific problem.
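For a concrete feel, here's a sketch of the CTAPHID initialization packet as laid out in the public CTAP spec: reports are typically 64 bytes, starting with a 4-byte channel ID, then a command byte with the high bit set and a big-endian payload length:

    import os, struct

    CTAPHID_INIT = 0x06          # "open a new channel" command from the CTAP spec
    BROADCAST_CID = 0xFFFFFFFF   # broadcast channel used before a channel is assigned

    def init_packet(nonce: bytes) -> bytes:
        assert len(nonce) == 8   # CTAPHID_INIT carries an 8-byte nonce
        header = struct.pack(">IBH", BROADCAST_CID, 0x80 | CTAPHID_INIT, len(nonce))
        return (header + nonce).ljust(64, b"\x00")  # pad the HID report to 64 bytes

    print(init_packet(os.urandom(8)).hex())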


Oh, thanks for the explanation. I guess this new protocol is why I don't see as many tokens randomly appearing in chat sessions these days.


Some YubiKey modes of operation do emulate keyboards and paste a string of characters into text fields or terminals.


They do, and an earlier version of my post mentioned that but I edited it down.

However, FIDO mode does not speak the keyboard sub-protocol. This means that on the one hand it's not usable out of the box with some random device that allows USB keyboard input, like the custom Yubico OTP mode is, but on the other hand it's able to deliver a good UX while having excellent security properties that would not be practical using keyboard emulation.


Oh dang, thanks for writing tpm-fido! It works really well for my use case -- avoiding mandatory and incredibly annoying Duo Mobile 2fa on my school's website -- although I tore the presence verification out of the code for my purposes :)

iirc, this relies on the uhid module to mock a physical fido2 key, and I'm not sure if there's a way to present a mock fido2 key OS-wide without relying on a virtual USB device. This was a bit of an issue when I tried setting up a similar fido2 emulator in a container, as the Google container OS doesn't allow loading kernel modules. Do you know if that's still the case, or if there's a way to mock a fido2 key systemwide without uhid?


> there's a way to mock a fido2 key systemwide without uhid?

On Linux, Chrome and Firefox interface directly with the USB and Bluetooth interfaces. There is no OS level abstraction for FIDO devices on Linux.


Isn't this approach significantly less secure than Apple's, though? As far as I understand, the secure enclave coprocessor in Apple devices stores key material and implements user verification (Touch ID etc.), right? Instead, software like tpm-fido bridges (in software) a user verification mechanism (maybe even a fingerprint reader) and the system's TPM. But such a system can be interposed with mere root access, and the TPM tricked into giving out its secrets, no? Please correct me if I'm getting it wrong, but Apple's approach is instead resistant even to full kernel compromise, precisely because the communication between Touch ID/Face ID and the secure enclave cannot be interposed.

I'm a tpm-fido user myself by the way, thank you psanford!


Yes, having the verification done by the secure enclave itself is more secure. The TPM spec does allow for direct integration with biometric devices, but I'm not aware of any general purpose computers that ship in this configuration.

> TPM tricked in giving out its secrets

To be clear, the key can never leave the TPM (with how tpm-fido is implemented). The threat is an attacker can perform an online attack by getting the TPM to sign messages it shouldn't. But you couldn't steal the key from the TPM and use it somewhere else.

But it doesn't really matter for the Webauthn threat model. An attacker with root access can steal your browser sessions directly.


> To be clear, the key can never leave the TPM (with how tpm-fido is implemented).

Yep sorry you're right you wouldn't get the actual keys to use elsewhere, you can just use them as if you had them on the "compromised" device only, my bad.

> But it doesn't really matter for the Webauthn threat model. An attacker with root access can steal your browser sessions directly.

If you're using WebAuthn to authorize the issuance of session tokens, you're absolutely right: just get root and steal them from the browser :) but WebAuthn is more versatile than that. You could e.g. require a WebAuthn assertion to authorize a payment. In that case root access still doesn't help you with a secure enclave, but it is sufficient to trick your server into believing the user has authorized the operation with tpm-fido, right? Again, I absolutely don't mean to detract from tpm-fido, just pointing out that, very sadly, I don't think a TPM + fingerprint reader + software can really replace integrated solutions like Apple's secure enclave, a YubiKey, etc. Unless I'm mistaken, it's not a tpm-fido shortcoming specifically.


A compromised main UI device could also show the wrong payment recipient, even if a hardware key is used. The text on the screen could be changed when the user meant to send a small payment to someone else; the Yubikey would be pressed like usual. Apple's standard prompt on the phone may not show the recipient either.


I agree that the secure enclave with integrated touch id is more secure. However, if your threat model is an attacker has root on your system, the secure enclave isn't going to protect you from much.


Is there a software implementation that would work without a TPM module where I can generate the keys myself?


Check out rust-u2f/softu2f[0].

[0]: https://github.com/danstiner/rust-u2f


Ok.. but hear me out here.

What if the only computer (or even the only Apple computer) a user has is an iphone, and someone swipes it?

Surely in that case you're now locked out of literally everything, no?

Please explain to me why this is stupid because I'm certain someone thought of this very early on.


As mentioned, the Passkeys are synced in iCloud, so a lost device doesn't mean the credentials are lost.

If you still can log in to iCloud, you're fine.

If your Apple ID password has been changed, Apple provides a workflow to regain access to your Apple ID [1].

There's also a process for account recovery for situations where you can't access your Apple ID because of two-factor authentication [2].

[1] https://iforgot.apple.com [2] https://support.apple.com/en-us/HT204921


A lot of sites allow you to create backup codes that you can print out and use once instead of the security key.



