Hacker News

It sounds like the biggest contributory problems here are:

1. Allowing unattended/automatic BIOS updates from a running OS at all

2. Being so paranoid about attacks by a spy with physical access to the computer that the keys cannot be replaced or revoked

I'm not a security researcher, but to just shoot the breeze a bit, imagine:

1. The OS can only enqueue data for a proposed BIOS update, actually applying it requires probable-human intervention. For example, reboot into the currently-trusted BIOS, and wait for the user to type some random text shown on the screen to confirm. That loop prevents auto-typing by a malicious USB stick pretending to be a keyboard, etc.

2. Allow physical access to change crypto keys etc., but instead focus on making it easy to audit and detect when it has happened. For example, if you are worried Russian agents will intercept a laptop being repaired and deep-rootkit it, press a motherboard button and record the values from a little LED display, values that are guaranteed to change if someone alters the key set and/or installs a new signed BIOS. If you're worried they'll simply replace the chips themselves, then you'd need a way to issue a challenge and see a signed, verifiable response.
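The confirmation loop in point 1 could be sketched roughly like this. A toy illustration in Python; the word list and function names are invented for the example, and a real firmware would render the phrase only on the physical display, where an auto-typing USB "keyboard" cannot read it:

```python
import secrets

# Hypothetical word list; in a real firmware this phrase would appear
# only on the screen, so a malicious HID device cannot replay it.
WORDS = ["apple", "river", "stone", "cloud", "ember", "frost", "maple", "quartz"]

def confirmation_phrase(n: int = 3) -> str:
    """Pick a random phrase the user must retype to approve the update."""
    return " ".join(secrets.choice(WORDS) for _ in range(n))

def confirm_update(typed: str, expected: str) -> bool:
    """Only apply the staged BIOS update if the human typed the phrase."""
    return secrets.compare_digest(typed.strip(), expected)
```

Because the phrase is random each boot, a pre-programmed keystroke injector cannot know what to type.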




Platform keys can be replaced given physical access to the computer. In fact they can generally be replaced by regular UEFI updates.

The problem here is in trusting, nay expecting, your average motherboard maker to either know anything about key management or give a shit about key management.


Not any less reasonable than expecting a mechanic to be competent and knowledgeable with brakes.


... except that in my experience most mechanics are competent with brakes, and most motherboard makers are not competent with cryptography, or indeed with anything having to do with software.

The auto repair industry has certain standards, and the computer industry... doesn't. In fact, the computer industry does everything it can to insulate itself from any kind of responsibility.


> most mechanics are competent with brakes

Because if they aren't and something happens, the mechanic is the one who ends up rotting in a cell. Put the same penalties in place for ODMs and OEMs, mandate that machine owners can always change the locks on their own property, and mysteriously every single problem we have ever seen with secure boot is no longer some obscure, inevitable, unavoidable technology issue.


Exactly why I do my own brakes.

And why I want to control my own keys.


Luckily, you can ignore the factory keys and load your own. This issue affects the default configuration; from what I can tell, loading in your own PK will override the built-in ones.
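For what it's worth, on firmware that supports Setup Mode this can be done with the `efitools` package. A rough sketch only; the exact flow (clearing the factory PK, entering Setup Mode, variable paths) varies by board, so treat this as an outline rather than a recipe:

```shell
# Generate your own platform key (a self-signed X.509 certificate)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=my local platform key/" -keyout PK.key -out PK.crt

# Convert it to an EFI signature list, then sign the update with itself
cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth

# With the firmware in Setup Mode (factory PK cleared), enroll the new PK
efi-updatevar -f PK.auth PK
```

Once your own PK is enrolled, you (not the vendor) control which KEK and db updates the firmware will accept.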


I was thinking about this too, with the TPM 2.0 configuration of some machines in mind. However, the keys used by the TPM are not the "platform key".

> from what I can tell loading in your own PK will override the built-in ones

How can one go about doing this? If you have any resources that show how, please share them. The public key of the "platform key" is "fused" into the hardware, is it not?


> And why I want to control my own keys.

Such as the keys to one's own house.


> Allow physical access to change crypto keys etc, but instead focus on making it easy to audit and detect when it has happened.

Shooting the breeze as well...

Have some (non-modifiable, non-updatable) portion of the firmware that, on boot, calculates a checksum or hash of the important bits at the beginning of the chain of trust (EFI vars, BIOS).

Then have it generate some sort of visualization of the hash (thinking something like gravatar/robohash) and draw it in the corner of the screen. Would need some way to prevent anything else from drawing that section of the screen until you're past that stage of boot.

That way every time you boot your computer you're gonna see, say, a smiling blue kitten with a red bow on its head. Until someone changes your platform key / Key Exchange Keys or installs a modified BIOS, and now suddenly you turn the computer on and it's a pink kitten with gray polka dots.

That way you don't have to actively _try_ to check the validity. It'd be very obvious and noticeable when something was different.
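The idea can be sketched in a few lines. A toy Python version (function name and grid scheme are invented for illustration) that derives a color and a horizontally symmetric grid from a SHA-256 of the measured boot state, so any change to the measurement changes the picture:

```python
import hashlib

def boot_identicon(measurement: bytes, size: int = 5):
    """Map measured boot state to a (color, grid) 'identicon'.

    Any change to the measurement (EFI vars, BIOS image, key set)
    changes the digest, and with it the picture shown at boot.
    """
    digest = hashlib.sha256(measurement).digest()
    color = tuple(digest[:3])              # RGB from the first 3 bytes
    half = (size + 1) // 2                 # mirror half the row for symmetry
    grid = []
    for row in range(size):
        cells = [digest[3 + row * half + c] & 1 for c in range(half)]
        grid.append(cells + cells[-2::-1])
    return color, grid
```

The mirroring is the same trick identicon generators use: symmetric images are easier for humans to remember and compare.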


I think the weakness comes if someone can predict or infer what the current display is, and then craft a malicious update that generates something visually similar enough to pass unnoticed.

Perhaps the kitten's bow is pink instead of red, etc. Even a little bit of wiggle room makes the attacker's job a lot easier, much like the difference between creating something that hashes to a known SHA-256 value versus something that matches most, but not all, of the bits.

A simpler approach would be for the small piece of trusted code to discard and replace the hash/representation with a completely new, sufficiently different one whenever anything changes.
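A minimal sketch of that "replace rather than evolve" idea (names invented for the example): pair the measurement with a random salt stored in the same trusted region, and discard the salt whenever the measurement changes, so the new picture shares nothing with the old one:

```python
import hashlib
import secrets

def representation(measurement: bytes, salt: bytes) -> bytes:
    """The bytes the boot-time visualization is drawn from."""
    return hashlib.sha256(salt + measurement).digest()

def next_salt(old_measurement: bytes, new_measurement: bytes, salt: bytes) -> bytes:
    # On any change, mint a fresh random salt: the attacker cannot aim for
    # a "close enough" picture because the mapping itself is thrown away.
    if new_measurement != old_measurement:
        return secrets.token_bytes(16)
    return salt
```

This removes the "pink bow instead of red" attack surface: the attacker can't search for a near-miss image when the post-change image is uniformly random.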


This fails to consider the possibility that the display hardware will be tampered with. It also does not consider the case where a copy of the picture is captured and then displayed by a separate program that pretends booting is slower than it actually is.

> Would need some way to prevent anything else from drawing that section of the screen until you're past that stage of boot.

It might need to prevent drawing anything on the entire screen. Otherwise a program might be able to modify the resolution, refresh rate, etc, to try to hide the picture or to display a different one.


I think that this is part of the way to do it, but not all of it. I might consider:

0. All of the BIOS code and other firmware should be FOSS. This should be printed in the manual as well. A simple assembly language might be preferable, and if the hex codes are printed next to it, they can also be entered manually if necessary.

1. The operating system cannot update the BIOS at all. Doing so requires setting a physical switch inside the computer, which disables the write protection of the BIOS memory and also disallows the operating system from automatically starting.

2. Require keyboards, etc to be connected to dedicated ports, not to arbitrary USB ports. (This is possible with USB but is a bit difficult; PS/2 would be better.)

3. You can program it manually (whether or not the BIOS memory is write-protected) without starting the operating system (this makes the computer useful even if no operating system is installed), perhaps with an implementation of Forth. When BIOS memory is write-enabled, such a program may be used to copy data from the hard drive to the BIOS memory.

4. Like you mention, it should be easy to audit and detect when keys have been changed. An included display might normally show other information (e.g. boot state, temperature measurements), but a switch could make it display a cryptographic hash instead. If you always fill all of the memory (even parts that would not otherwise be used), it becomes more difficult to tamper with in the case of an unknown vulnerability.

5. I had seen a suggestion to add glitter and take a picture of it, to detect physical tampering. This can help avoid alterations of the verification mechanisms themselves. If desirable, you can have multiple compartments that can be sealed separately, each with its own glitter. If some of these compartments are internal, a transparent case around some of them might help in some ways (as well as help detect other problems with the computer that are not related to security).

However, even the above will need to be done correctly to avoid some problems, since you have to consider what is being tampered with. (You might also consider using power analysis to detect the addition of extra hardware; the external power can then be isolated, with a surge protector added, to mitigate others attacking your system with power analysis, and sometimes to mitigate power problems that cause the computer to malfunction.)


0. Most of the UEFI is already open source. See TianoCore.

1. There are some things that may need to be updated from time to time that need to be applied before the OS is loaded - microcode updates being one of these. I would still like a physical write-enable switch.

2. Making a device that presents itself as a keyboard but isn't one is easier than ever with things like Arduino and Raspberry Pi, regardless of the interface. There is probably no way to verify physical presence that can't be duplicated remotely. At some point humanity has to get beyond the primitive mentality of "this stuff on a computer monitor / from a speaker looks/sounds just like the real thing, so it is the real thing" and accept that computers are machines, not in and of themselves a proxy for reality unless specifically considered so.

3. Funny, the original 1981 PC booted to ROM BASIC if it couldn't boot off of anything, so it was useful without an OS. I really wish UEFI firmware was on a replaceable SD card and the system would literally have no firmware if it was not present. I would pay the 2 cents more it would cost OEMs. With all the capability in modern chipsets I feel like this would be trivial to do.

4. Good idea. I wish computers had a separate display that is attached through some legacy interface like RS-232 and that doesn't go through VGA at all for this purpose, like a cheap LCD screen.

5. The old punched cards were very low density, but had one really nice property: you could physically see the data with nothing more than your eyes. It's funny that a stack of punched cards could potentially be more secure than millions of instructions of code hidden in a NAND or ROM that you cannot see or verify except with another device that you also have to trust, running on a platform you trust. Even then you can't really see the bits on a NAND or ROM without special, expensive equipment. It'd be cool if there were a high-density storage medium whose binary contents are physically viewable and discernible without a CPU. Something like QR codes, but much, much denser.


1. Yes. However, disallowing the operating system from automatically starting does not mean that the operating system cannot be started at all. If you deliberately want the operating system to add microcode updates like that, then you can perhaps type "AUTOBOOT" (or whatever the appropriate command is) at the Forth prompt that comes up when the write-enable switch is activated (or, if you don't like that, you can instead write the code to read the microcode updates from a disk, verify their cryptographic hash, and then apply them). FOSS microcode updates would also help with the security issues when doing so.

2. This is true, and can be useful in some circumstances, but having a dedicated port is still more secure, since it means that it will only act as a keyboard if you expect it to do so. (This does not prevent the external device from providing undesired input if it is connected to the keyboard port, but it does prevent it from doing so if it is connected to a different port.)

3. I know that the original 1981 PC had ROM BASIC, and I think newer computers ought to be designed to do such a thing too (although you could use Forth instead of BASIC if you prefer).

4. I meant an internal connection, not related to any of the existing ones; leaving the RS-232 free for connecting external devices that will use RS-232.
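The "read the update from disk, verify its cryptographic hash, then apply it" flow in point 1 could look roughly like this (a hedged sketch; the function name is invented, and the pinned digest stands in for a value obtained out of band, e.g. printed in the FOSS manual):

```python
import hashlib

def verify_microcode(blob: bytes, expected_sha256_hex: str) -> bool:
    """Only proceed to flash if the blob matches the pinned digest.

    expected_sha256_hex is a placeholder for a value you obtained
    out of band, e.g. from the manual or checked on another machine.
    """
    return hashlib.sha256(blob).hexdigest() == expected_sha256_hex
```

The actual flashing would only happen from the Forth prompt with the write-enable switch set, as described above.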


>1. The operating system cannot update the BIOS at all.

This would be so much more advanced than we have now.

Reverting to an approach proven superior over decades would not be a step backward compared to UEFI.

You really need to once again be able to reflash your motherboard using a clean image, with no possibility of any malware remaining on board afterward, if things are going to be as advanced as they once were.

For decades I thought it was always going to be normal for a quick reflash of the BIOS to give complete confidence and trusted validation, so that you could rapidly rebuild a verifiably clean system from scratch using clean sources every time.

Progress can surely occur without advancement :/



