> I can't say I know how secure or safe this is going to be without examining it in detail myself

I've been working on something related to Heads (a minimal Linux-based secure bootloader) since January, too, and I can say the boot verification used by Heads is sound and solid. The implementation is basically a verified/measured boot scheme with TOTP.

During initialization, you generate a random TOTP key, add the key to your TOTP authentication device (e.g. Google Authenticator on a mobile phone), and "seal" the key in your TPM along with your boot "measurements". During the boot process, these "measurements" (SHA hashes of various information about the hardware, software, and firmware configuration) are passed to the TPM. If the configuration has changed, the TPM refuses to release the TOTP secret; otherwise the key is released and a shell script in Heads uses it to compute the current one-time code.

If the number on your mobile phone matches the number on the screen, that is evidence the boot firmware has not been tampered with.
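
For concreteness, here is a minimal Python sketch of the TOTP math both sides end up computing (Heads does the equivalent in the shell scripts linked below, and Google Authenticator does it on the phone; the standalone function here is only my own illustration of RFC 6238, not the Heads code):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, at=None, step=30, digits=6):
        """RFC 6238 TOTP (HMAC-SHA1, 30-second steps), as Google Authenticator computes it."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if at is None else at) // step)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # Both sides hold the same enrolled secret; if the TPM unseals it,
    # the code Heads prints should match the one on your phone.
    secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, not a real one
    print(totp(secret))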

Read the code here:

https://github.com/osresearch/heads/blob/584c07042ef4898de52...

https://github.com/osresearch/heads/blob/584c07042ef4898de52...

(Yes, it's all shell scripts... I'm not sure whether this is a security issue, but the design was probably inspired by the initrd/sysvinit shell scripts.)

Obviously, this means that every time you boot your computer, you have to check the 6-digit code against your phone before Heads boots the actual Linux kernel or asks for your full-disk-encryption key. To me, the Librem Key improves on this by automating the process (as the Nitrokey it is based on already has TOTP functionality) and using a simple protocol to automate the challenge-response verification.
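
I don't know the exact protocol the Librem Key speaks over USB, so the following is only a toy sketch of the general idea of moving the comparison off the screen: Heads computes the code from the TPM-unsealed secret and hands it to the token, and the token recomputes the code from its own copy of the secret and signals match or mismatch (say, with a green or red LED) instead of making you compare digits. The names and the one-step clock-drift window below are my own assumptions, not Purism's or Nitrokey's code:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, at, step=30, digits=6):
        key = base64.b32decode(secret_b32, casefold=True)
        mac = hmac.new(key, struct.pack(">Q", int(at // step)), hashlib.sha1).digest()
        off = mac[-1] & 0x0F
        return str((struct.unpack(">I", mac[off:off + 4])[0] & 0x7FFFFFFF) % 10 ** digits).zfill(digits)

    def token_accepts(presented_code, token_secret_b32, drift_steps=1):
        """Hypothetical token-side check: recompute the code for nearby time
        steps and accept if any matches, then blink green or red accordingly."""
        now = time.time()
        return any(
            hmac.compare_digest(presented_code, totp(token_secret_b32, now + d * 30))
            for d in range(-drift_steps, drift_steps + 1)
        )

    # Host side: present the code derived from the secret the TPM just unsealed.
    host_code = totp("JBSWY3DPEHPK3PXP", time.time())  # example secret only
    print("token LED:", "green" if token_accepts(host_code, "JBSWY3DPEHPK3PXP") else "red")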

If you want to learn more, make sure to read about Heads first.

https://github.com/osresearch/heads

This presentation is a good start.

https://trmm.net/Heads_33c3

Attack Vectors:

1. The trustworthiness and security of the TPM. The Free Software community has historically rejected TPMs because of the DRM aspect of "trusted" computing. But to this day, the complete DRM dystopia (where all proprietary software runs inside the Intel Management Engine and performs DRM-related cryptography in TPM black boxes; was it called Microsoft Palladium?) has fortunately not materialized as a threat. So now even RMS acknowledges that there is no actual reason not to build free-software security tools on top of the TPM.

Another concern is a potential backdoor, but even if there is one, using the TPM still improves security compared to a completely unprotected machine. Perhaps someday there will be a free-hardware TPM, but not in the foreseeable future, and Heads's use of the TPM is still a big step towards better security.

2. Completeness of measurements. If some software/hardware change is not measured, or can be replayed by the attacker, the attack will not be detected. But the measurements are done by coreboot in the early boot stages and by Heads in the later ones, and the coverage looks fairly extensive to me (I sketch the PCR-extend mechanism behind these measurements after this list). Maybe there is still room for attacks, but it would be difficult; pentesters are always welcome. BTW, a man-in-the-middle attack on the entire verification process is possible, but it is mostly of theoretical interest, as the attacker would have to sit between you and your screen.

3. Another general issue is the security of the TOTP seed, e.g. if your Google Authenticator is compromised. The problem is somewhat mitigated by using a Nitrokey/Librem Key, but the TOTP code still runs on a generic STM32F1 MCU, not on the OpenPGP card, and the STM32F1 is known for its lack of tamper resistance. Because of NDAs, there are currently no good alternatives. But still, just like with the TPM, I think it greatly improves the current situation, so let's use it; it still has problems, and in the future we may do better.

4. Automation. The Librem Key automates the challenge-response, unlike the original Heads, which prints the code on the screen. In the original Heads, if Heads itself is tampered with, the user will notice an incorrect or missing code. But with automation, perhaps the attacker now has a way to trick the user? This needs checking.
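
To make the "measurements" in point 2 concrete, here is a toy model of the TPM 1.2 PCR-extend operation they rely on: each boot stage hashes the next component and extends a PCR with it, and the TOTP secret is sealed against the expected final PCR values, so any change in a measured component makes the TPM refuse to unseal. Only the extend formula is the real TPM semantics; the component names are made up:

    import hashlib

    def extend(pcr, data):
        """TPM 1.2 extend: new_pcr = SHA1(old_pcr || SHA1(data))."""
        return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

    # PCRs start as 20 zero bytes at reset; each stage measures the next one
    # before jumping to it (the component names here are illustrative only).
    pcr = bytes(20)
    for component in (b"coreboot stages", b"Heads kernel + initrd", b"boot scripts/config"):
        pcr = extend(pcr, component)

    # The TOTP secret was sealed to the PCR values recorded at enrollment;
    # if the recomputed value differs, the TPM will not release the secret.
    print("final PCR:", pcr.hex())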

> and even then i'm an amateur at cryptography stuff

Me too. I'm also working on a similar security token in my spare time. Hopefully I can submit a Show HN before the New Year; you may find it interesting to read.

Finally, all of my descriptions are based on my first-hand impressions, not necessarily facts, and are totally unverified. Make sure to check the primary sources!




> But to this day, the complete DRM dystopia (where all proprietary software runs inside the Intel Management Engine and performs DRM-related cryptography in TPM black boxes; was it called Microsoft Palladium?) has fortunately not materialized as a threat.

That's optimistic. It's reasonable to assume that Intel's Management Engine has been penetrated by NSA, the CIA, the FSB, and the PLA's Third Department. It mostly relies on security through obscurity, which can be overcome with money.


Your point is about mass surveillance and the unauditable backdoor that the ME represents, which is an important security issue in its own right, and I'm definitely not optimistic about that situation, especially now that Boot Guard has made removing the ME impossible.

What I was addressing there is a different issue: where the FOSS community's general objection to the TPM came from. In the original vision of "Trusted Computing" around 2006, it was expected that TPM- and ME-based DRM would prevail in proprietary systems and lock down every piece of media, software, and file.

You can read Lucky Green's presentation from 2002 to understand more about the situation of that time. https://web.archive.org/web/20180416211840/https://cypherpun...

> You could create Word documents that could be read only in the next week

- Steven Levy

> Fritz Hollings Bill: S. 2048: Plug “analog hole” with 2048-bit RSA: Monitor out, Video out, Audio out. Microsoft: Additionally encrypt keyboard input to PC. S. 2048 makes it illegal to sell non-TCPA compliant computers: A $500,000 fine and 5 years in prison for the first offense; double that for each subsequent offense.

But fortunately, THEY were way too optimistic...

> As of 2015, treacherous computing has been implemented for PCs in the form of the “Trusted Platform Module”; however, for practical reasons, the TPM has proved a total failure for the goal of providing a platform for remote attestation to verify Digital Restrictions Management. Thus, companies implement DRM using other methods. At present, “Trusted Platform Modules” are not being used for DRM at all, and there are reasons to think that it will not be feasible to use them for DRM. Ironically, this means that the only current uses of the “Trusted Platform Modules” are the innocent secondary uses—for instance, to verify that no one has surreptitiously changed the system in a computer.

> Therefore, we conclude that the “Trusted Platform Modules” available for PCs are not dangerous, and there is no reason not to include one in a computer or support it in system software.

https://www.gnu.org/philosophy/can-you-trust.en.html


Awesome reply. From the article I didn't have enough information about how this worked to even begin guessing at it, but this answers everything I needed to know.



