> The Librem Key opens up possibilities for even stronger anti-interdiction protection for customers who need it. We will be able to link a Librem Key with a laptop running Heads at our facility and then ship them separately. Then when each package arrives you can immediately test for tampering with an easy “green is good, red is bad” test.
That's really neat, but I'm curious about what interdiction threat models this does or doesn't help with. For example, could the people performing the interdiction have sophisticated enough hardware-tampering capabilities that they could modify the key or extract its secrets and then send it on its way? Could they have a small enough chip that they could place in the USB connector itself to do some kind of malicious thing later on?
I've always assumed if you are a high-value target and a state-level actor, or a sufficiently motivated private entity, knows where high value hardware is being manufactured, or dispatched from, you ought to assume your security is already hosed. Either because they've simply tracked the delivery or tampered with it.
Are we to assume the CIA and its ilk don't have operatives inside these organisations?
Of course, if you're sufficiently organised, you'd have low-level grunts take on all the risk.
Or did staying up for way too long selling methamphetamine for the better part of a decade make me way too paranoid?
Looks like... It's just a Nitrokey Pro with different branding, nothing special. But this is how open hardware designs should work, and having multiple vendors is good.
edit: the hardware design is nearly identical, but not the firmware. See my follow-up post.
> One of the most exciting opportunities the Librem Key opens up to us is in integrating with our tamper-evident Heads BIOS to provide cutting-edge tamper-evident security but in a convenient package that doesn’t exist anywhere else.
...
> We have worked with Nitrokey to add a custom feature to our Librem Key firmware specifically for Heads. This custom firmware along with a userspace application allows us to store the shared secret from the TPM on the Librem Key instead of on a phone app. Then when Heads boots, if the BIOS hasn’t been tampered with the TPM will unlock its copy of the shared secret, and Heads will send the 6-digit code over to the Librem Key. If the code matches what the Librem Key itself generated, it flashes a green light. If the codes don’t match, it flashes a red light.
Looks like hardware-wise that's true, but from the article they've added what looks like a challenge-response type mode that the key informs you about (based on TOTP?), so that you can say the BIOS has validated itself to the key.
> We have worked with Nitrokey to add a custom feature to our Librem Key firmware specifically for Heads. This custom firmware along with a userspace application allows us to store the shared secret from the TPM on the Librem Key instead of on a phone app. Then when Heads boots, if the BIOS hasn’t been tampered with the TPM will unlock its copy of the shared secret, and Heads will send the 6-digit code over to the Librem Key. If the code matches what the Librem Key itself generated, it flashes a green light. If the codes don’t match, it flashes a red light.
I can't say I know how secure or safe this is going to be without examining it in detail myself (and even then i'm an amateur at cryptography stuff) but it sounds like a good start to things.
> I can't say I know how secure or safe this is going to be without examining it in detail myself
I've been working on something about Heads (a minimal Linux-based secure bootloader) since January, too. And I can say the boot verification used by Heads is sound and solid. The implementation is basically a verified/measured boot scheme with TOTP.
During initialization, you generate a random TOTP key, add the key to your TOTP authentication device (e.g. Google Authenticator on a mobile phone), and "seal" the key in your TPM along with your boot "measurements". During the boot process, these "measurements" (the SHA hashes of various information about the hardware, software, and firmware configuration) are passed to the TPM. If the configuration has changed, the TPM refuses to release the TOTP secret; otherwise, the key is released and used by a shell script in Heads to calculate the 6-digit code.
If the number on your mobile phone matches the number on the screen, that proves the system has not been tampered with.
(yes, all shell scripts... I'm not sure whether this is a security issue, but this design is probably inspired by the initrd/sysvinit shell scripts)
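That shell-script computation is just standard RFC 6238 TOTP. A minimal Python sketch of the same math, as an illustration (this is not the Heads code, and the TPM sealing/unsealing steps are omitted):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the big-endian 30-second counter,
    then RFC 4226 dynamic truncation down to `digits` decimal digits."""
    if t is None:
        t = int(time.time())
    counter = struct.pack(">Q", t // step)          # time window index
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `12345678901234567890` at T=59 this yields `287082`, matching the published test vectors.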
Obviously, this means that every time you boot your computer, you have to check the 6-digit code against your phone before it boots the actual Linux kernel or lets you enter your full-disk-encryption passphrase. To me, the Librem Key improves on this by automating the process (Nitrokey already has TOTP functionality) and using a simple protocol to automate the challenge-response verification.
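The key-side half of that automation could look roughly like this. This is my guess at the shape of the protocol, not Purism's actual firmware, and the one-step clock-skew window is an assumption on my part:

```python
import hashlib
import hmac
import struct
import time

def totp6(secret, t, step=30):
    # Same RFC 6238 computation the TPM-released secret feeds into.
    mac = hmac.new(secret, struct.pack(">Q", t // step), hashlib.sha1).digest()
    off = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[off:off + 4])[0] & 0x7FFFFFFF
    return str(code % 1_000_000).zfill(6)

def key_verdict(key_secret, code_from_heads, now=None):
    """Key-side check: recompute the code from the key's own copy of the
    secret and compare in constant time. Green light on a match, red
    otherwise. The +/- one-step skew tolerance is my assumption."""
    now = int(time.time()) if now is None else now
    for skew in (0, -30, 30):
        if hmac.compare_digest(totp6(key_secret, now + skew), code_from_heads):
            return "green"
    return "red"
```

The constant-time compare matters little for a 6-digit code shown on a light, but it is the idiomatic way to compare secrets.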
If you want to learn more, make sure to read about Heads first.
1. The trustworthiness and security of the TPM. The Free Software community has historically rejected TPMs because of the DRM aspect of "trusted" computing. But to this day, the complete DRM dystopia (where all the proprietary software runs inside the Intel Management Engine and performs DRM-related cryptography in TPM black boxes; was it called Microsoft Palladium?) fortunately didn't turn out to be a threat. So now even RMS acknowledges that there is no actual reason not to implement free-software security tools on top of the TPM.
Another concern is a potential backdoor, but even if there is one, using the TPM still improves security compared to a completely unprotected machine. Perhaps someday there will be a Free Hardware TPM, though not in the foreseeable future, and Heads's use of the TPM is still a big step forward for security.
2. Completeness of measurements. If some software/hardware changes are not measured, or can be replayed by the attacker, the attack will not be detected. But the measurements are done collaterally by coreboot in early boot and by Heads later on, and to me they are fairly extensive. Maybe there is still room for attacks, but it would be difficult; pentesters are always welcome. BTW, a man-in-the-middle attack on the entire verification process is possible, but it is mostly of theoretical interest, as the attacker has to sit between you and your screen.
3. Another general issue is the security of the TOTP seed, e.g. if your Google Authenticator is hacked. The problem is somewhat mitigated by using a Nitrokey/Librem Key, but the TOTP code still runs on a generic STM32F1 MCU, not on the OpenPGP card. The STM32F1 is known for its lack of tamper resistance, but because of NDAs there are currently no good alternatives. Still, just like the TPM, I think it greatly improves the current situation, so let's use it. It still has problems, and in the future we may do better.
4. Automation. The Librem Key automates the challenge-response, unlike the original Heads, which prints the code on the screen. In the original Heads, if Heads itself is tampered with, the user will notice an incorrect or missing code. But with automation, perhaps the attacker now has a way to trick users? Need to check this.
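On point 2, "measurement" concretely means extending a TPM PCR: each boot stage hashes the next component and folds it into the register, so a change anywhere in the chain yields a different final value and the TPM refuses to unseal. A toy model of the TPM 1.2-style SHA-1 extend operation (illustrative only, not the coreboot/Heads code):

```python
import hashlib

def extend(pcr, component):
    """TPM-style extend: new PCR = SHA1(old PCR || SHA1(component)).
    The PCR can only be folded forward, never set directly."""
    return hashlib.sha1(pcr + hashlib.sha1(component).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 20  # PCRs start zeroed at platform reset
    for component in components:
        pcr = extend(pcr, component)
    return pcr
```

Because the chaining is order-sensitive and one-way, replacing or reordering any measured component changes the final PCR value, which is exactly what the sealed TOTP secret is bound to.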
> and even then i'm an amateur at cryptography stuff
Me too. I'm also working on a similar security token in my spare time. Hopefully I can submit a Show HN before the New Year; you may find it interesting.
Finally, all of my descriptions are based on my first-hand impressions, not necessarily facts, and totally unverified. Make sure to check the primary sources!
> But to this day, the complete DRM dystopia (where all the proprietary software runs inside the Intel Management Engine and performs DRM-related cryptography in TPM black boxes; was it called Microsoft Palladium?) fortunately didn't turn out to be a threat.
That's optimistic. It's reasonable to assume that Intel's Management Engine has been penetrated by NSA, the CIA, the FSB, and the PLA's Third Department. It mostly relies on security through obscurity, which can be overcome with money.
Your point is about mass surveillance and the unauditable backdoor of the ME, which is an important security issue in its own right, and I'm definitely not optimistic about the situation, especially now that Boot Guard has rendered removal impossible.
What I was addressing there is a different issue: where the general objection to TPMs in the FOSS community came from. In the original vision of "Trusted Computing" around 2006, the expectation was that TPM- and ME-based DRM would prevail in proprietary systems and lock down every piece of media, software, and file.
> You could create Word documents that could be read only in the next week
- Steven Levy
> Fritz Hollings Bill, S. 2048: Plug the “analog hole” with 2048-bit RSA: Monitor out, Video out, Audio out. Microsoft: Additionally encrypt keyboard input to the PC. S. 2048 makes it illegal to sell non-TCPA-compliant computers: a $500,000 fine and 5 years in prison for the first offense; double that for each subsequent offense.
But fortunately, THEY were way too optimistic...
> As of 2015, treacherous computing has been implemented for PCs in the form of the “Trusted Platform Module”; however, for practical reasons, the TPM has proved a total failure for the goal of providing a platform for remote attestation to verify Digital Restrictions Management. Thus, companies implement DRM using other methods. At present, “Trusted Platform Modules” are not being used for DRM at all, and there are reasons to think that it will not be feasible to use them for DRM. Ironically, this means that the only current uses of the “Trusted Platform Modules” are the innocent secondary uses—for instance, to verify that no one has surreptitiously changed the system in a computer.
> Therefore, we conclude that the “Trusted Platform Modules” available for PCs are not dangerous, and there is no reason not to include one in a computer or support it in system software.
Awesome reply. From the article I didn't have enough information about how this worked to begin guessing at it but this answers everything I needed to know about it.
What exactly makes you say that? What cryptosystems has he designed? What has he published? He's the maintainer of the most important tooling for an older cryptosystem, sure. But, by way of comparison, not everyone who works on a TLS stack for a major OS is a cryptography expert.
I actually don't know whether he is or he isn't an expert, but the likely basis for the claim that he is --- maintainer of GnuPG --- isn't a valid one. People have funny ideas of where crypto expertise comes from.
(Disclaimer, or something: I consistently and reliably tell clients that while I'm comfortable testing certain cryptosystems, I am not qualified to design them. ['lvh, one of my partners, is, but he has formal training and I don't.])
I said it because he does cryptography implementation professionally, if perhaps for insufficient reward. I'm sorry that's being used to cast aspersions on him by attacking something I didn't say. Why exactly is Koch not qualified to consult on applications of OpenPGP, as the main author of GnuPG (which I gather Purism are actually using)?
As I said, lots of people do cryptography implementation work professionally without being cryptography experts. In fact, I think that probably describes most implementors. So you haven't answered my question or supported the claim you made earlier.
This parent comment looks like a refutation. But the thread never claimed Werner Koch was an "amateur", and in fact didn't even mention him, so I don't know what the point of the parent comment is. If it's not a refutation but a point about security, it would be better phrased positively, as "Werner Koch is an expert", to make the point clear.
The Nitrokey Pro this is based on doesn't do U2F, according to Nitrokey's literature and reviews of the product. Does this custom firmware somehow add a U2F capability?
If it doesn't, I don't understand the product. It's more expensive than a Yubikey 4, and bigger, and the Y4 does U2F along with the smart card RSA stuff.
We use Y4s in our practice, and my experience is that for every time I use the smart card RSA stuff (for instance, to SSH into something with a long-term RSA key), I use the U2F feature 10 times.
Yes, I was also curious about the features as the price seems pretty high if this was just for authentication.
I’d also be curious about the level of certification, if any, that they can achieve by being open.
Disclaimer: I’m working on Solo (https://solokeys.com), an open source fido2 security key. Cost will be $15 and less during Kickstarter, and we’ll apply for FIDO2 certification at the next round in mid November.
The thread is actually better than the linked post because it tells you about ways you can (relatively easily) set up systems without long-term ssh keys.
Right, there's an interesting split in modernizing/hardening SSH philosophies:
Philosophy 1: Move to hardware SSH keys. That's what Y4s and Nitrokeys represent: keys where, if your machine is compromised, the SSH key itself can't easily be stolen.
Philosophy 2: Move to SSH CAs that issue short-lived certificates, and use U2F to authenticate issuance. This is roughly how you'd do a modern integration of an SSO system (like Okta or GSuite) with your SSH services.
SSH CAs have more steam behind them, and are desirable for a bunch of reasons that hardware SSH keys don't really address.
Since the primary function (beyond U2F, which these keys don't do) of a hardware key is to protect your SSH key, I guess it's worth considering how they fit into the modern SSH worldview or whatever you want to call it.
Isn't using U2F+SSH CA's just moving the challenge-response protocol one level up?
U2F is "just" a hardware P-256 key (or rather a set of per-origin keys, but ignoring privacy issues they are comparable; of course I'm assuming the same setup, with touch-to-use required).
(I've set up SSH CAs manually and I'm familiar with low level U2F for that matter).
Another way to put this is that the only Nitrokey product that claims U2F support right now is one that doesn't do OpenPGP, and uses a lesser hardware stack.
If this does U2F, I'd be interested in knowing. I'm not saying it's impossible.
I just wanted the evidence for the assertion that this is based on Nitrokey Pro. I don't understand what U2F has to do with its advertised purpose. Obviously it's a plus if you can use it for other things, and that doesn't compromise the implementation, but there doesn't seem to be a reason it should.
The YubiKey Nano is meant to be left constantly in the USB port, essentially turning your entire laptop into a key. That doesn't work if the key is meant to be used to unlock the laptop.
I love the Nitrokey products and recommend everyone to try them out. The Nitrokey Start is a fully open source GnuPG token implemented purely in software (gnuk). Their Pro and HSM rely on a smartcard whose code is not fully open (the Nitrokey HSM is based on SmartCard-HSM).
It depends what you use it for, and how you set it up. Aside from the Heads/Librem specific features, this is basically the same as a NitroKey/YubiKey. So:
- For use as a key storage device / GPG smartcard, you should have the usual contingencies in place (e.g. backups of decryption keys, alternative signing/auth keys). Only GPG nerds are likely to use this feature.
- For MFA use, you can list an additional device as another acceptable factor. E.g. a second key, or an authenticator app on your phone.
The Heads boot validation stuff is non-blocking; you can still boot into a system without verifying the boot partition/BIOS. Alternatively, there’s no reason you couldn’t fall back to TOTP on a phone, though I’m not sure if the interface supports that currently.
Source: I put everything on a YubiKey, then lost it.
You generally do one (or both) of two things with hardware security devices:
* You buy multiple devices and configure your systems to honor both, as a backup plan.
* You back up your keys or their artifacts to paper or a small drive and keep that somewhere safe.
If you do neither of those things, and you lose the hardware key, you are either (a) boned or (b) using an insecure system where the key is just theater.
Isn't the point of this device that it generates the private key onboard and never divulges it? Or does it have a one-way lever you have to pull at the beginning before which it's possible to sync the key to another device or download and print it?
You can literally export the RSA key from some of these keys (for backup purposes), but you can also just enroll multiple keys in whatever system you're using them with (or, you can enroll a software-based key, which you then keep only on a disconnected storage device).
Most devices offer a backup mechanism where the key material is encrypted and can only be imported into a device of the same type. For HSMs this is called wrapping a slot's key material with a wrap key. With GnuPG you are allowed to export the key, or generate it on your computer and transfer it to the device.
Some services allow you to configure multiple separate keys, i.e. different private keys. LastPass, for example.
FYI, if you scan those QR codes (plenty of phone apps for this) you can get the text of the secret and save that somewhere. Much easier to work with than a picture.
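The text inside those QR codes is a de facto standard `otpauth://` provisioning URI, so extracting the secret is mechanical. A sketch (function name is mine, and error handling is omitted):

```python
from urllib.parse import urlparse, parse_qs, unquote

def parse_otpauth(uri):
    """Pull the label, base32 secret, and issuer out of an otpauth://
    provisioning URI (the text payload inside a TOTP enrollment QR code)."""
    parts = urlparse(uri)
    params = parse_qs(parts.query)
    return {
        "type": parts.netloc,                      # "totp" or "hotp"
        "label": unquote(parts.path.lstrip("/")),  # e.g. "Issuer:account"
        "secret": params["secret"][0],             # the base32 seed to back up
        "issuer": params.get("issuer", [""])[0],
    }
```

For example, `otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example` yields the secret `JBSWY3DPEHPK3PXP`, which you can store alongside your other key backups.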
TOTP/HOTP can still be sort of trivially MITM'd - a phishing form could ask for your auth code and then turn around and supply it to the site in question.
U2F uses a challenge-response protocol that should (in theory) make it impossible to MITM. Google has said that they have had zero successful phishing attacks since they switched to using U2F devices (I think Yubikeys) for all employees.
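A toy model of why the origin binding defeats relay phishing. HMAC stands in for the token's real P-256 signature, and all names are illustrative; the point is that the signed message includes the origin the browser actually saw, so a response captured on a lookalike domain never verifies against the legitimate one:

```python
import hashlib
import hmac

def u2f_sign(device_key, challenge, origin):
    """Token side: sign the server challenge bound to the browser-reported
    origin. (Real U2F signs with a per-origin ECDSA key; HMAC is a stand-in.)"""
    msg = hashlib.sha256(origin.encode()).digest() + challenge
    return hmac.new(device_key, msg, hashlib.sha256).digest()

def server_verify(device_key, challenge, expected_origin, response):
    """Server side: recompute over the origin the server knows it is."""
    expected = u2f_sign(device_key, challenge, expected_origin)
    return hmac.compare_digest(expected, response)
```

A TOTP code, by contrast, is just six digits with no origin baked in, so a phishing page can forward it verbatim within the validity window.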
I agree. I would rather use my phone than a separate hardware device, especially since I use a lot of computers. I might look into something like this https://krypt.co/
Some of the other posts have already picked up that this doesn't appear to do U2F. Along that line, does anyone know if there are any new FIDO2-compliant keys on the market, or is it still just Yubikey? Are more sites adopting WebAuthn, or is it stagnant?
Will that be FIDO2 compliant? I was under the impression that it's using the older standard that does not allow a per-account identifier to be stored on the device:
To be sure, I'm not sure if it's that important. Mostly, I'm trying to understand where the market is going to settle and who is making devices for that market.
You know what: it looks like I'm wrong; the product comparison page for Nitrokey says the pro doesn't do U2F; only their FIDO U2F key (which uses a less capable processor that doesn't do OpenPGP) does.
I don't know about specific MTBF rates, but all hardware eventually goes bad or gets damaged. The standard protection against failure ( and loss, and theft ) is to register multiple devices with your accounts and physically secure the backups.