CVE-2021-3011: Key recovery on Google Titan Key (ninjalab.io)
237 points by hexa- on Jan 7, 2021 | hide | past | favorite | 80 comments



Note: Google, in typical fashion, has named 6+ products "Titan" (Titan M, Titan C, Titan Security Key (available in USB-A, USB-C, and Bluetooth versions), Titan Security Module, OpenTitan, and maybe a few more if you count the old, recalled Bluetooth versions that look identical to the new one).

The various Titan Security Keys are also made by Feitian, who sometimes use the same auth chip and sometimes don't, but the keys look externally identical either way.

The product's sole purpose is to establish a secure chain of trust, yet it starts out of the gate broken, with ambiguous or misleading claims about exactly which Titan you're verifying.

Google will pay you $1 million to hack the Titan but not the Titan hacked here - the other Titan[1]. Furthermore they are happy to tell you that their products, like Google Cloud Platform, are "Secured by Titan" but not which Titan [2].

This is frustrating because the Titan M is an absolutely brilliant device, with some real advancements to normalize embedded security, including an SPI interposer to monitor communications (a real leap forward) - and it should not at all be conflated with a generic, white-labeled, non-HSM product that makes no claims whatsoever and has been broken at least twice before [3] [4]. The Titan C is an even bigger improvement over the Titan M, but not in any way they care to disclose, which may or may not indicate weaknesses in the Titan M [5]. Likewise, OpenTitan [6] is crashing through barriers others didn't even know were there in establishing verifiable silicon roots of trust, but it differs from the Titan M in ambiguous ways due to various foundry and PDK issues - which may be as innocuous as having to run the chips through at different process sizes, but who knows, because while OpenTitan is verifiable, Titan M/C aren't.

[1] https://duo.com/decipher/hack-the-titan-m-get-usd1-million

[2] https://cloud.google.com/blog/products/gcp/titan-in-depth-se...

[3] http://www.hexview.com/~scl/titan/ - note the migration from the NXP A7005a to A7005c

[4] https://www.engadget.com/2019-05-15-google-recalls-some-tita...

[5] https://showcase.withgoogle.com/titan-c/

[6] https://opentitan.org/


I still don't understand which titan keys I have and whether this affects them.


Titan on Pixel -> OK

Titan BT or NFC -> Physically not OK, but remote attacks are still impossible, so unless you're targeted and someone got physical access to your fob, it doesn't matter.


Thanks


So the Titan security keys are sold as if they're secure because they're from Google, but they're not that secure? Not secure enough to offer bounties for hacking?

It's nice they have a secure chip (Titan M), like Apple's Secure Enclave. But the security keys imply a greater sense of security, since they aren't running lots of other apps the way a smartphone does.


security by obscurity bro


> Our work describes a side-channel attack that targets the Google Titan Security Key’s secure element (the NXP A700X chip) by the observation of its local electromagnetic radiations during ECDSA signatures (the core cryptographic operation of the FIDO U2F protocol). In other words, an attacker can create a clone of a legitimate Google Titan Security Key.

This is a wildly impressive vuln to discover. Cheers to these guys. Holy hell.


Indeed it is impressive, but not wildly so. Most consumer-ish hardware falls prey to all sorts of TEMPEST attacks. You can even get started with a HackRF and some inductive loop antennas; that would set you back $130 or so on eBay.

From there, you would have to establish some sort of baseline - that would be the hard part. Once that's done, you'll be dealing with amplitude-based signals (2ASK, primarily). The next step is to determine the frequency the device is running at and tune to it, or to its 2nd or 3rd harmonics.

From there, it's getting the signal out of the noise, and decoding it for the win.
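Mechanically, that last step is classic envelope detection: rectify, average over each symbol period, threshold. A toy sketch against a synthetic trace (the threshold and sample counts here are illustrative only, and there is no real SDR I/O):

```python
import math

def ask_demodulate(samples, samples_per_bit, threshold):
    """Recover bits from a 2ASK trace by envelope detection:
    average the rectified signal over each bit period and threshold it."""
    bits = []
    for i in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        envelope = sum(abs(s) for s in samples[i:i + samples_per_bit]) / samples_per_bit
        bits.append(1 if envelope > threshold else 0)
    return bits

def make_trace(bits, samples_per_bit=100, carrier_cycles=10):
    """Toy 2ASK signal: carrier present for 1-bits, absent for 0-bits."""
    trace = []
    for b in bits:
        for n in range(samples_per_bit):
            trace.append(b * math.sin(2 * math.pi * carrier_cycles * n / samples_per_bit))
    return trace
```

Real captures add noise, drift, and imperfect symbol timing on top of this, which is where the actual work lives.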

I've done it a few times. Sorry, I don't have a CVE to my name.


> observation of its local electromagnetic radiations during ECDSA signatures

Is there any hardware which is invulnerable to this type of observation?


Physics-wise, in an ideal world, no. But hardware can and should be designed (with better shielding and improved tamper resistance) so that the emitted radiation is minimal to the point of being indistinguishable from noise. This is very achievable: radiation-hardened parts are hardened against outside-in radiation, but usually also block inside-out EM radiation.


Even with this problem, using the keys for U2F is safer than SMS two factor auth. Possibly also safer than authentication app on phone, which could be compromised in various ways.


To be clear, "this problem" requires the attacker to have sophisticated equipment with physical access to your key for a significant amount of time. So yes still by far the most secure way, right below a non-clonable key.


But that significant amount of time could have been in the supply chain prior to your acquiring it.

Attributing who got what would be a challenge, though.


That doesn't work.

The attack recovers an ECDSA private key for one account. So e.g. maybe your Google account. But this key does not exist when you receive the Titan in its packaging, it's created (randomly) when you enroll the key for your Google account.

These devices create entirely random ECDSA private keys for every single enrollment, and this attack recovers one key, using a real challenge from the relying party for that key. If they want your GitHub, or Facebook or your US government account, those have separate keys which need a separate attack.


It's been a while since I've read the U2F spec and my info may be a major version out of date but I understand that the enrollment-specific key was encrypted to the long term device key then returned to the service for storage.

The attack to mount would be against the long-term device-specific key, no?
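The Titan's actual scheme isn't public, but a well-known stateless-token design (Yubico has described one along these lines) derives each per-enrollment key from a device master secret, so the "key handle" the service stores is just a nonce plus a MAC. A hedged stdlib sketch of that idea (all names and sizes here are illustrative, not the real firmware):

```python
import hmac, hashlib, os

def register(device_secret: bytes, app_id: bytes):
    """Derive a fresh per-enrollment private key; return it plus a key handle
    the relying party stores, so the stateless token can re-derive it later."""
    nonce = os.urandom(32)
    priv = hmac.new(device_secret, app_id + nonce, hashlib.sha256).digest()
    # The MAC binds the handle to this app, so a handle stolen from one
    # service can't be replayed against another.
    mac = hmac.new(device_secret, priv + app_id, hashlib.sha256).digest()
    return priv, nonce + mac

def rederive(device_secret: bytes, app_id: bytes, key_handle: bytes):
    """Re-derive the enrollment key from the handle; reject foreign handles."""
    nonce, mac = key_handle[:32], key_handle[32:]
    priv = hmac.new(device_secret, app_id + nonce, hashlib.sha256).digest()
    expected = hmac.new(device_secret, priv + app_id, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("key handle does not belong to this device/app")
    return priv
```

Under a scheme like this, recovering the device secret (or the wrapping key, in an encrypt-and-return variant) would indeed compromise every enrollment at once, which is why it is the more valuable target.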


You are correct about how this works. I think the "Side Journey To Titan" paper makes it obvious the authors also understand how it works.

So if you could magically summon working attacks, you would choose the symmetric AES key, yes.

One conclusion you could draw from this paper is the authors are idiots and didn't realise they should attack that key or else didn't have the relevant expertise to do so.

Another, I suspect far more likely conclusion is that protecting AES keys in dedicated security hardware is a problem lots of people already put effort into and these researchers wisely concluded they wouldn't get any traction there because this is a standard component.


Does the Titan key not allow you to regenerate the key? That normally should be the first step after getting a hardware key. Yubikey definitely allows a full reset.


Much safer than a TOTP authentication app, which is susceptible to phishing attacks, unlike U2F.


Not just phishing attacks.

Compared to TOTP, U2F uses asymmetric cryptography to avoid using a shared secret design, which strengthens authentication against server-side attacks. Hardware U2F also sequesters the client secret in a dedicated single-purpose device, which even given the vulnerability described here still has a tiny fraction of the attack surface of a TOTP app and its general purpose host device.


I had to switch back from Yubikey to TOTP because AWS's CLI tools don't work with U2F. This really annoys me.


At risk of telling you something you already know: you can use the TOTP mode on the Yubikey, if you’re looking to use it for AWS secrets despite AWS’s lack of support for U2F for CLI workflows.

That at least keeps more of your MFA key material on the hardware token and off of your phone / other shared devices.

The easiest way to do that is via the ykman CLI or Yubico Authenticator application (TOTP secrets stored on the key via either method go to the same place, so you can use both interfaces to access the same codes):

https://support.yubico.com/hc/en-us/articles/360016614940-Yu...

https://www.yubico.com/products/services-software/download/y...


I've been meaning to buy a Yubikey. What is the best practice for using a security key? Is there a mechanism for backing my keys up somewhere safe so that a loss of key doesn't mean a loss of my accounts?


The main idea with a security token is that you cannot get the keys out of them.[ß]

So for a truly secure and reliable setup, get three. Enroll them all as parallel 2FA tokens. Keep one with you, one in a relatively easily accessible but non-obvious place, and one in a safe or bank deposit box. That way when the one you have with you breaks or you lose it, promote the secondary to your primary and order a new one to replace the promoted one.

The third is your emergency backup, for when both normally needed keys are destroyed or lost.

Now of course, this only works when the accounts you want to secure allow enrolling more than one FIDO2 token. Sadly, that's still not the most common setup. For instance, AWS only allows enrolling one 2FA token per account.

ß: Some functionality modes allow extracting private keys by design.


I keep one always plugged into my computer (like a Nano model), and one on my keychain. You don't usually need more than that as there are ideally other ways to recover your account (printed recovery keys etc).

If your laptop gets stolen with the key inserted and you didn't have time to invalidate it, the thief still has to get into your local account and find your saved login information in order to leverage the key - and that's only until you notice your computer is stolen and invalidate the key everywhere. Otherwise, it's just another random key to the thief.

I don't consider that part of my threat model, and I've had my laptop stolen before with the key plugged in.


I have a similar setup - Nano 5C on laptop, 5C NFC on keychain (for use with iPad or iPhone), and a third one in a safe deposit box.

I use them for services like Google, but also for SSH keys. (Since 8.2, OpenSSH has built-in U2F support.)


The SSH setup with a Nano and a laptop is pretty neat, in fact, once you get it going. For a desktop it wouldn't work as smoothly, because of the touch-for-every-auth requirement.

Even with the well-known document (by HN regular StavrosK) at hand, you can have a confusing experience getting the resident keys going at first. So I put together something to hopefully help people out: https://bostik.iki.fi/aivoituksia/projects/yubikey-ssh.html

FWIW, when I was working on the draft version, searching for the special error code brought up only three pages in Google, and only one of them was actually helpful. At least in my filter bubble.

PS. I am aware of Filippo's yubikey-agent, which AFAIU uses PIV instead of FIDO2. Looking into that will be for the future.


Is it considered a no-no to use it with a password manager for other accounts that I consider less critical? I was thinking that for most accounts, I would use the password manager, but use 2FA for the password manager. My primary email account that everything links to, would just be 2FA.

Meaning, I would only really have to remember two strong passwords. The rest would be strong passwords, but without 2FA, and easily changeable without forcing myself to remember yet-another password and which account it belongs to.


Why not use 2fa on all sites which allow it?


Can you create two dummy accounts and give them full admin access to the first account? It's kind of a dumb hack, but it seems straightforward?


Buy 3 keys, keep one with yourself, one at home and one in a distant relative's basement. Preferably the latter two in fireproof safes.


Buy 2. Put one in offsite location (e.g. your notary). I don't have a notary; I got one always in my pocket, and the other one at home in a safe (pickable though). YMMV.


For 99% of humans, an attacker breaking into their home to pick their safe lock is not part of their digital threat model, so that’s pretty sane.


Correct, though could be an insider (friend of a child for example). If my house burns down and my trousers with it I'm toast.


You may want to look into AWS Single Sign-On. It may not be available in the region you're mostly using, but that's not necessarily an issue [0].

The service itself is free but requires an identity provider. If you already have a compatible one, you can use it at no additional cost. Otherwise, you'll have to pay for the IdP.

This setup allows you to offload MFA handling to your main IdP with the added bonus of using the same method of authentication, possibly integrated into your OS (for example if using Windows Hello / AzureAD).

At work, we use Azure AD as the IdP for AWS SSO and it works fairly well, aside from Azure's crappy (inexistent) support of security keys outside of Windows.

There is one gotcha with an easy workaround: the SDKs don't usually support the login part of the SSO flow, and sometimes don't support it at all (terraform comes to mind). To work around this, I'm aware of two tools you can use:

* aws-vault [1], which I personally use and works great for setting the required environment variables, no need to actually have it handle any sort of key

* aws-sso-util [2], which I've seen recommended but never tried

---

[0] It may be an issue if you need to use the managed ActiveDirectory service, which needs to be in the same region

[1] https://github.com/99designs/aws-vault

[2] https://github.com/benkehoe/aws-sso-util


AWS' support for security keys is crap in general. For example, you can only have a single key per account so it's impossible to have a backup key in case you lose your primary key.


Hmm... I'm in the process of adding a Yubikey (or any U2F key) to my AWS account. Can you share any advice or feedback? Risks, etc.?


AWS is crap in this regard, because you can only have a single 2FA device registered for your root account.

That means if you lose that 2FA device, you're hosed. Unless you can convince support to turn off your account's 2FA which one would hope they wouldn't do under any circumstances.

Instead, you should use TOTP with a mobile app. That way you can make a backup by enrolling several phones with the same QR code - or printing out the QR code and storing it somewhere safe.


Alternatively don't use u2f on your root account, just on a standard user. And never touch the root account for anything other than provisioning users.


Or use two security keys, one on your root user and another for your own personal IAM user, but you're right, don't use your root user for everyday tasks.


At least they acknowledged that exploiting this requires physical access to the key, expensive equipment, etc. and on balance is not so realistic for most people that they should stop using the key.


Just curious if anyone knows how long the 4000-6000 observations required would take on this particular device?


Quoting https://arstechnica.com/information-technology/2021/01/hacke... :

> Extracting and later resealing the chip takes about four hours. It takes another six hours to take measurements for each account the attacker wants to hack. In other words, the process would take 10 hours to clone the key for a single account, 16 hours to clone a key for two accounts, and 22 hours for three accounts.
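The quoted figures fit a simple linear model: roughly 4 hours of fixed chip extraction/resealing overhead plus 6 hours of measurement per account. A throwaway check of that reading:

```python
def clone_hours(accounts: int, teardown: int = 4, per_account: int = 6) -> int:
    """Total attack time implied by the quoted Ars Technica figures:
    fixed (de)packaging cost plus per-account measurement time."""
    return teardown + per_account * accounts

# 1, 2, 3 accounts -> 10, 16, 22 hours, matching the quote
```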


"It allows attackers to extract the ECDSA private key after extensive physical access"

So if you have physical access to the device, is it an issue?


"6000 observations". I guess a single night could be enough. You sleep at a hotel; your key fob is cloned.


If by "you" meaning the owner, than it appears not. That would be a remote vulnerability.



Source of what? If it's the PDF, sure, but the post is the reporters' summary.

(For future reference

Posted link as of comment: https://ninjalab.io/a-side-journey-to-titan/

First-party PDF: https://ninjalab.io/wp-content/uploads/2021/01/a_side_journe...)


My link is to the CVE official site.


Thanks, at least this page loads.


It looks like all hardware 2FA keys with NXP A700X chip are affected.


List of products affected mentions "Yubico Yubikey Neo" as vulnerable too


It probably shares the secure element (hardware).

But that doesn't mean the new Yubikeys (Series 5) are not affected - just that they are not known to be affected.

I hope Yubico will make a follow-up post about whether or not other Yubikeys are affected too.

But then given what is needed to use this exploit, it probably doesn't matter for many people.


Yes. The general idea of the attack is always going to be possible in principle. What happened here is they demonstrated they can actually do it in practice, and they gave us some parameters for how easy/hard it was. Other devices can (and in future should) make it harder than this, but it's never truly going to be impossible.

One thing I like very much about Security Keys is that the intuitive experience with ordinary physical keys applies. The idea that if someone stole your key that's bad makes sense.


I'll just add to the sibling comment with one educated guess. Since the attack requires recording of approximately 6k U2F auth operations, we can quite easily calculate the minimum wall time.

From a purely anecdotal experience, it takes between 1 and 2 seconds to "cycle" a YubiKey from a working keypress to the next working keypress. The delay is probably built in to the firmware to mitigate attacks like this. Let's be conservative and say you can run a U2F auth operation every second.

6000 * 1s = 1h40m. That's how long an attacker would have to have the key in their possession to generate enough material to run the rest of the attack offline. So perfectly doable as an evil maid attack with enough specialised gear. Infeasible as a drive-by attack.
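The back-of-envelope arithmetic above, spelled out (the 1 s per auth is the commenter's conservative guess, not a measured figure):

```python
observations = 6000      # approximate number of U2F auths the attack needs
seconds_per_auth = 1     # conservative; a key cycles in roughly 1-2 s
total_seconds = observations * seconds_per_auth
hours, minutes = divmod(total_seconds // 60, 60)
# -> 1 h 40 min of uninterrupted access, before any offline analysis
```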


For many years now, the Brazilian government has allowed both citizens and companies to acquire crypto USB keys tied to their identities that can digitally sign legally binding documents.

This is one of the commonly used devices, which has a NXP P5CC081 chip: https://www.usmartcards.com/downloads/dl/file/id/156/product...

I wonder if similar attacks could be applied to these keys, and what would be the implications.


I recently rolled out smartcard SSH authentication via PIV on Yubikey NEOs. Since the attack requires a few thousand observations, I’m still quite safe, right? An attacker would still need to know the PIV PIN.


The attacker needs physical access to your Yubikey NEO and to then run a few thousand observations. Using a U2F dongle is still MUCH better than many other types of 2 factor authentication.

My family are enrolled in Google Advanced Protection and some of our U2F dongles are the affected Titan keys. I'm not at all concerned and am not rushing out to switch to different dongles.


This specific attack doesn't impact your usage scenario. It is impossible to say with certainty whether a hypothetical attacker, who had stolen one of the NEOs enrolled in your system and had suitable lab equipment, could conduct a similar attack to recover authentication credentials from the NEO if they had also stolen the PIV PIN. Perhaps, perhaps not.

In general you should not be worried about this, it is unlikely you are so well defended that "Buy this lab equipment, hire an expert, and then steal someone's Yubikey" is the most viable attack, so time spent figuring where the low hanging fruit is will be better than worrying about this.


It only affects ECDSA. If it affected RSA or general smartcard security like PIN access, it would be an earth-shattering story, since it would affect SIM cards, banking cards, satellite CAM cards, you name it. That's why any talk about cloning shouldn't be so casual and misleading - it promotes FUD.


> an attacker can create a clone of a legitimate Google Titan Security Key

Seems like quite a leap from an ECDSA implementation vulnerability, which allows you to reconstruct an ECDSA private key, to claiming to be able to clone the whole device.

As far as I know on those Feitian NFC K9 fobs U2F is implemented as an applet, so that's just one applet out of several. No mention of RSA at all. E.g. I have a 'dev' version of it, it doesn't have U2F applet installed, but I can install others.


Something that just crossed my mind is whether this method is destructive or not. Is it possible to steal the key, read it, then give it back to the owner?


The article says they are "cloning" the key, which would imply that yes.


They aren't cloning shit though. They are getting single ECDSA private key after six thousand operations or something.


Their method was destructive to the outer casing, because it was easier. If you wanted to clone a key it is likely you could find a way to either avoid destroying the outer packaging or replace it with an apparently identical case, at some expense. They don't necessarily need to damage the actual chip (although they did trash at least one during R&D)


I'm very glad that they mentioned this attack is only applicable under a very specific threat model.


Is there a wallet product out there for keeping a Yubikey or Titan safe from physical harm as well as stray readers? Or is that just silly at this time? I've seen a pocket-knife-like wallet on Walmart. It would be nice if it were lined with metal mesh.


Very impressive cross-disciplinary research, combining chip decapping for side-channel probing, reverse engineering, machine learning (albeit unsuccessful?) and a cryptographic attack. Kudos.


There is a list of other affected keys at the bottom of the article.


My biggest gripe with the security keys is that I want to use them daily. With something so infrequently used... how do I know it works?


This is really excellent work! Very in-depth but clear to read.

I hope this will be taken into account in future products, as hardware is of course hard to fix.


1. Can people please stop using light gray text? Low-contrast text is a major accessibility issue.

Besides that, while it does sound bad, and probably is bad for some companies using these chips for high security (e.g. Google itself), for many users it luckily will most likely never matter.

Now I'm wondering if my Yubikey is affected? While they list the Yubikey Neo the Yubikey 5* products are not listed.


Unlike the Neo, the Yubikey 5 uses a chip from Infineon: http://www.hexview.com/~scl/neo5/.


Why does this blog have a loading bar that gets stuck around 99% for me? Every request has loaded and is cached by my browser, yet it hangs at 99% artificially for like 30s.


If you do client side “rendering” (which means that you get page content from the server in json format, and generate the html from javascript), you have to show a placeholder until the webpage content gets generated. Otherwise the user sees an incomplete page with elements jumping around for a fraction of a second on load.

But the placeholder might get stuck if anything goes wrong.

Personally, I hate it. Better to use static html for blog content.


It's this new frontend craze of putting artificial loading bars as a placebo effect. Github does it, for example. The site is just really slow, and the status bars are rarely, if ever, hooked up to the actual loading metrics.


I don't know why it does that, but I looked at it and it's using something called "Pace Progress Bar".

Source for it: https://github.com/CodeByZach/pace


That's all I see as well. Fortunately the direct CVE link works.


How is this a CVE? Lots of stuff leak through side channels. I don’t get it.


A secure element is supposed to prevent key recovery; if you can recover a key, that's a problem - hence the CVE.


Why would that be relevant to if it gets a CVE or not?



