Secure Boot is broken on 200 models from 5 big device makers (arstechnica.com)
177 points by verifex 56 days ago | 141 comments



I've had so many issues with secure boot on my machines that if I ever saw a secure boot error message I would never think "oh, I must have a rootkit"

Instead I would assume, in order

- my config broke it

- OS update broke it

- the bios doesn’t properly handle any case that isn’t “preinstalled OEM windows”

I had a laptop that, as far as I could tell, could only boot into Windows' default bootmgr.efi. I could turn off secure boot and tamper with that efi to boot Linux, but it refused to acknowledge other boot loaders from within the bios. It wouldn't surprise me in the slightest if secure boot isn't properly handled. I've had too many issues with cheap computers having janky bioses.


I feel many security researchers like to overemphasize the importance of certain security practices (the most common one being "longer and random passwords with symbols and upper case letters") without considering their costs, the trouble they cause, and humans' lazy nature. Forcing long passwords causes people to use repetitive or easy-to-remember words, and enforcing Secure Boot doesn't work if it gets in the way of normal boots. Making sure that these security mechanisms "just work" is as important as enforcing rules like these.

A natural question is whether Secure Boot is the right place to protect against the type of attack mentioned in the post. Given that we've already invested a lot of effort in fixing kernel privilege escalations, and any program able to install BIOS rootkits can access all data and modify any program anyway, what justifies the extra complexity of Secure Boot (which includes all the extra design necessary to make it secure, such as OSes robust to tampering even with kernel privileges)? I mean, why invest so much in Secure Boot when you could harden your kernel to prevent tampering with the BIOS in the first place?


Real security researchers know that requiring symbols and upper case letters actually reduces security. Those requirements are explicitly rejected by the latest NIST recommendations:

https://pages.nist.gov/800-63-3/sp800-63b.html

So I'm basically agreeing with you, that a lot of people "in security" are just cargo culting.


For me it cannot be justified. A corporate environment might be different though.

Still, as a consumer I reject it for personal use because I believe boot malware is rare since other forms of attack have been vastly more effective and I also don't have an evil maid.

I just hope we don't get to a ridiculous situation where my shitty bank panics if I root my phone and wants to extend that behavior to PCs. "Trusted computing" is a failure in my opinion, and "security" on mobile devices is an example where it significantly impacts the usefulness of the devices themselves. Of course this might be more driven by ambitions to lock down phones than real security, but still.

Secure boot might be useful for devices you administer remotely. But secure boot validation doesn't mean anything to me; the system could be infected without secure boot noticing anything. It probably only gets in the way of OS installations.


The idea is to stop you getting rootkits that can never be removed. You want to feel safe knowing you can just wipe your computer and start again.


Except in very rare cases, you can usually flash the BIOS while wiping your computer, in the same way that malware does. Also, Secure Boot doesn't remove the kind of rootkit that survives a storage wipe, since what it checks has to boot from your hard drive anyway.


Was this laptop in question from Hewlett-Packard (HP)? Because I swear I've seen this exact behaviour on an HP laptop.


> To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.

Am I correct that Secure Boot purely exists to prevent this attack vector: malware gets root on the OS, hardware allows updating firmware via OS now owned by malware, but Secure Boot means you have to wipe only the hard drive instead of the firmware to eliminate the malware.

It seems like it would be a lot simpler and more reliable to add a button to motherboards that resets the firmware to the factory version (on memory that can't be written by a malicious OS).


Also things around physical access: if you steal my laptop, FDE prevents you from getting my data immediately but if you install malware which takes over the boot process, you get that data as soon as I type in my password.

If the process changes so the hardware only loads signed firmware, which only loads a signed boot loader, which only loads a signed kernel, etc. that avenue of attack is closed. It also makes it possible to trust a used computer.

The problem is that other than Apple nobody has really been committed to doing it well - it’s begrudging lowest-bidder compliance and clearly not something many vendors are taking pride in.


Secure Boot with factory keys has never prevented this attack, by design. You can take a valid, signed OS image from your favorite vendor (Microsoft, Red Hat, whatever), write some userspace code for it that asks for a passphrase and looks exactly like the legitimate passphrase prompt, and configure the boot order to boot to it. It will pass the Secure Boot checks because it is completely valid. Secure Boot, as configured by default, never had userspace verification as a design goal.

There are at least two solutions:

1. Deploy your own Secure Boot keys and protect them with a firmware password or whatever mechanism your particular system has to lock down Secure Boot settings.

2. Use TPM-based security so that even knowing the passphrase doesn’t unlock FDE unless the PCRs are correct.

#1 is a bit of a pain. #2 is a huge pain because getting PCR rules right is somewhere between miserable and impossible, especially if you don’t want to accidentally lock yourself out when you update firmware or your OS image.

Of course, people break PCR-based security on a somewhat regular basis, so maybe you want #1 and #2.
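For #2 on a systemd-based distro, the binding step itself is short; it's picking PCRs that survive updates that's the miserable part. A rough sketch, untested, with a hypothetical device path:

    # bind a LUKS volume to the TPM, sealed against Secure Boot state (PCR 7)
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

Add more PCRs (0, 2, 4, ...) and you get stronger guarantees, but also much better odds of locking yourself out after a firmware or bootloader update.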


#2 is also something that a security expert needs to audit, so that booting an extracted stock recovery ISO (which has the kernel signed by the same keys as the real system) does NOT unlock the FDE.

https://discussion.fedoraproject.org/t/issue-with-automatic-...


Right. But this gives a really nasty dilemma:

First, you need a recovery image to be rejected by the TPM rules.

Second, you need an updated image that you prepare yourself, or that the distro prepares, etc, that will respect your security goals (e.g. does not allow you to boot it and copy files off) to be accepted.

Maybe a mainstream distro could distribute a UKI that will unlock a disk and run that disk’s userspace with no safe mode, recovery mode, etc without a password, but I’ve never seen such a thing.


> other than Apple nobody has really been committed to doing it well

I believe Chromebooks also do this fairly well.


Good point. I was thinking “PC” as opposed to phones but those totally count.


Chromebooks aren't phones.


Yes, hence my agreement with sillywalk. In my original comment I was thinking of the category of PC-like things which are more geared towards work and phones which are more limited.

Phones don’t have as much contrast because there are several vendors approaching Apple’s level of security, whereas on the classic PC side it’s just a mess. ChromeOS is an excellent addition to the comparison since it’s more locked down than a PC but still productive for many workers and really shows that the problem is coordination. Google cares about security and their ChromeOS devices are more secure than most PCs despite having a lot in common because they don’t leave it to the whims of the hardware vendor.


Big phone.


> It also makes it possible to trust a used computer.

Thankfully all this complexity is not the only thing that allows one to trust a used computer. There are other options, like not having any modifiable software (that is, software not stored in non-replaceable ROM) run before handing off control to a bootloader loaded from external media.


> Also things around physical access: if you steal my laptop, FDE prevents you from getting my data immediately but if you install malware which takes over the boot process, you get that data as soon as I type in my password.

There's still a simple attack vector: installing a hardware keylogger on the keyboard wires.


Do enterprise vendors like Dell do it well enough to meet corporate requirements?


do folks in the business really simply steal a laptop and try to pull all data? or do they steal the laptop and wipe it and flip it... if they wanted your data wouldn't they steal you, the human, too?

the signing method only offers to buy more time before the inevitable data is "breached" by a threat actor - it's the same buying-time for any and all encryption. the system can get too complex, and the underlying problems of humans will always exist (and are amplified by more points of failure).. (accidents, data breaches, exploits, etc). the system needs to be immutable, but also mutable at the same time (for updates, etc) - and that's not exactly something easy to accomplish.

and with apple.. they try, yes, but it is forever a walled garden. we've already seen their secure enclave bootloader shenanigans get exploited on phones - and it was not fun for those people whose phones were compromised. apple suffers from us humans, too (we will never be perfect, nor will our software)


> do folks in the business really simply steal a laptop and try to pull all data? or do they steal the laptop and wipe it and flip it... if they wanted your data wouldn't they steal you, the human, too?

Governments definitely worry about it, and I'd be shocked if e.g. banks didn't also put it into requirements. Access can be temporary, too: imagine if you get 15 minutes alone in someone's office or they have a kiosk in the lobby, etc. - not enough time to open the case up but plenty to toss a USB drive in and reboot. Repeat for lost devices or scenarios like the KnowBe4 attack disclosed yesterday, where some dude might not be able to explain cracking the case open.

> the signing method only offers to buy more time before the inevitable data is "breached" by a threat actor - it's the same buying-time for any and all encryption.

You have to think about cost, too. It appears to be safe to buy a used Mac because Apple employs competent cryptographic engineers and very few targets are worth involving a lab with truly serious hardware. This could be the case on the PC side too, but it’s undercut by vendors skimping on execution and until Secure Boot is pervasive and robust, nobody can easily tell whether hardware they’ve lost control of can be trusted. People have been getting malware on used computers for years and a trusted boot process makes it easier both to tell if that’s happened and to be confident that you’ve fully wiped a system.


i only chose those questions to pick on the concept of "stealing a laptop" - it's more the hypothetical use case where the majority of users, given "my laptop got stolen", will never see their system again. folks in the business of stealing a laptop will resell it if they can (a laptop in a random car in SF.. sounds real profitable to try to decrypt some aes 2tb data for a cat pic); secure boot has not guaranteed a password to access the bios in my experience - and not all bios are created equal. it just makes it harder for data on the drive to be accessed (and certainly prevents my neighbor from putting a rootkit in my bootloader)

of course govts worry about data loss - and implanted root-kits; yes we want to prevent those, but my point is there are many steps along the path where the complexity can get out of hand, and every added step to a system is another step of potential failure - and anything we invent will be vulnerable to human mistakes/errors/etc (like we've literally seen). the problem is the firmware is mutable, the os is mutable, etc etc. the signed stages are a band-aid (not that im smart enough to solve the problem) and it's a matter of time before something like a cert leak happens (again). its funny too, because we worry about 1000's of folks' computers having a rootkit (which needs physical access, when things like my-pc-looks-tampered-with are not considered), and then we let location data be gathered by literally every company, hmmm

the scenario where someone gets 15 min alone in somebody's office (this made me laugh actually - there's a countless amount of what-ifs): a company with any kind of compliance should never let an untrusted person be alone (especially with access to a computer); a smaller company, surely, we'd assume would be less of a target, but that's not a guarantee - but that's also why all companies should not leave their vaults with raw cash open for anyone to access.

as far as used systems go; folks will always fall victim to that which they do not know. for a newly owned computer a user should be fresh-installing the firmware and OS. but convenience has folks trained to plug-and-play with 0 downtime, 0 setup, 0 knowledge of options. with apple, of course, that cannot be done on the same level as on my non-apple system. and from what i remember, apple folks need to have proof of receipt for a used sale, and even then can still get trolled on a used sale with the find-my-mac lockout - maybe it's improved nowadays; i'll simply pass and rather buy new (not that im supporting apple)


> a company with any kind of compliance should never let an untrusted person be alone ..

Trust could be misplaced.


<< if they wanted your data wouldn't they steal you, the human, too?

As chilling as it may be to explore this line of thought, I think there are real, pragmatic considerations that make 'stealing' humans along with laptops less than ideal. Laptops get damaged, lost and so on all the time. A missing laptop raises some, but minimal, suspicion and attention. Now, with a human missing, whoever did the deed will likely have a difficult time moving around, assuming LEOs in the area are competent.


there's a literal market to clone device data. you don't even have to steal them.

in the 90s, Israel's Cellebrite made millions off govt procurement for ...i forgot the acronyms. but basically devices where you plug in a phone and it copies all contacts and messages.


The case you're outlining (a UEFI rootkit) is pretty much the worst case; assuming you get infected by some malware which decides to install a malicious firmware (BIOS update), then pretty much nothing is getting in the way of that.

What secureboot is designed to prevent is malicious changes to the OS bootloader (a conventional rootkit), which is usually shimx64.efi or grubx64.efi on linux/dualboot machines, or bootmgfw.efi on windows. Secureboot checks the signature of .efi files before they're allowed to run during boot, ensuring they were signed by one of the trusted keys. And unless you've made changes to your secureboot config, that means microsoft and/or the hardware vendor.
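You can inspect this chain yourself. On Linux, sbverify from the sbsigntools package will list who signed a given .efi binary (the path below is just an example; it depends on your ESP layout):

    # show the signatures attached to the default bootloader
    sbverify --list /boot/efi/EFI/BOOT/BOOTX64.EFI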


I think “UEFI rootkit” usually refers to a malicious .efi file installed in the ESP. An actual firmware rootkit, installed on the flash chip, can likely bypass Secure Boot entirely, and may well be able to bypass TPM protections as well.


It is possible to use Secure Boot as part of a fully verified bootchain. The firmware verifies the bootloader. The bootloader verifies the kernel (and kernel arguments, and ramdisk...), the kernel verifies all executables. Userspace programs verify critical data files.

There are systems out there that do this, and having something like Secure Boot is essential to their design (as is measured boot, which is the main mechanism TPMs leverage).

However, this solution is utterly unworkable for the personal computer market. Instead, we have a bunch of general purpose kernels signed to run on any computer, but which are willing to run any userspace you throw at them.


I'm having strange nostalgic flashbacks to the '90s, where I kept wondering why nobody offered a hard drive with a physical read-only toggle button. (Mounted to the front of the 5.25 inch bay in a tower chassis, as was the style of the time.)

Obviously you need some read+write storage elsewhere on the same computer, but you could reliably freeze large chunks of stuff in a way that would be impervious to viruses or hackers.


I remember USB drives in the '00s that had a read-only toggle. They were useful for rescuing machines that had a virus.

Edit: A quick search reveals that, of course, you can still buy them today. I have not felt a need for one in ages.


I strongly suspect that most or all of the modern "hardware" write-protect switches are actually just suggestions to the drive firmware. Which may very well itself be modifiable.


I can't imagine how it would be possible to do it any other way for a flash storage device.

A mechanical hard drive could at least theoretically have a physical lock attached to the drive head which prevents it from approaching the platters if it is engaged.


Flash storage requires high voltage to do an erase (which needs to precede a write operation).

Back in the EPROM days, that was easy, just don’t supply 25V or whatever.

Modern flash still needs those high voltages but generates it on-chip via charge pumps. If your read-only switch physically disconnected the charge pumps, you would have read-only flash.


Writing flash takes (relatively) high voltage and the voltage boosting circuitry could be routed through a switch. It generally isn't, and the voltage converter is often an on-chip charge pump so this wouldn't be an easy retrofit, but the current state of affairs is due entirely to lack of interest rather than lack of possibility.


Do those charge pumps use external capacitors? If so, you could disconnect/float those capacitors, or if that would damage or glitch the chip, you could replace the real capacitors with some circuit like a voltage regulator + diode that would be designed to provide the charge pump output rail with a voltage that's high enough not to glitch the chip but low enough to be unsuccessful in writing to the flash. Would one of those ideas work, and allow the retrofit you envision to be designed with existing flash memory silicon + a few additional components?


I believe they're on-die capacitors for the charge pumps these days. The external ones are just for smoothing.

Probably a method to retrofit write/erase protection would be to just do power analysis and cap the current the flash chip can receive. Or shut it down if that works for you.

Not sure if they’re intelligent enough to run their charge pumps slower under compromised electrical conditions. Or if they’ll go haywire if they can’t do idle-time wear levelling/block erases.


Isn’t it really only the erase that requires high voltage? So with a blank flash chip, without higher voltages, you could write to it to your heart’s content until you need to free up deleted content?

Edit: I think I’m wrong here and high voltage is needed for both.


For years, Dell laptops came with a USB key containing drivers, a sort of "rescue boot disk". I once tried using them as a normal pen drive, but then realized they were read-only.

If there is a way to make them writable via software, that would be very interesting (and dangerous).


Erm. The read head and the write head in a magnetic drive are the same head. You can't keep the head away from the surface if you want to read the disk. But you can disable power to the driver that puts write current into the head.

... and you could absolutely build similar functionality into a flash chip. But most likely you can't actually buy such chips, at least not with any real capacity.


There are multi-actuator hard drives out there. I don't know if any of them separate read heads from write heads, but it would certainly seem possible for such a drive to exist.


This is the way a smart person looks at write-protect switches.


Hmmm, I would absolutely buy one of those if it also had a hardened case and a firm connection point for my real-world keychain. The use-case is a "my house burned down what next" backup, password-manager stuff and other details I might need before/without accessing any cloud-backup services.

I may need to read some of its files on a not-very trusted device, and I don't want to risk that device also tampering/trojan'ing other files, like backup copies of the software needed to decrypt the data files.

A simpler scenario might be a USB stick that I use for carrying files to be printed at the local library.


I have one of these as a boot disk (Medicat) for this exact reason.

Also because some of the software included in Medicat is flagged by some anti-virus software and I don't want them removed.


There were things like this, but it was more to prevent accidental writes. Some of the old 10" drives had a write enable toggle.


Secure Boot is the first component in a verified boot chain from initial power-on to application level code. Signed, verified firmware boots signed, verified kernel with strict authenticity and integrity guarantees. The goal is, presumably, to attest to the authenticity and integrity of everything the system runs, but when it comes to kernel modules and device drivers, userland OS components, and applications, those are the kernel's responsibility. But Secure Boot is an essential link in this chain.


That sounds correct, but even the savviest of users might not be aware they have malware installed when they decide to re-install Windows. If cleaning malware requires pressing a button on the mobo, then I can imagine only a single-digit percentage of users will actually press it.


If they're not worried about malware, and there is some, then they'd probably get reinfected by their data anyway.


Immediately gets slapped over the head by the requirement: "preventing downgrade to a vulnerable version" (which would be just a matter of enough time passing)


And by "vulnerable version" they mean the version before they added ads to the boot screen.


it protects against boot and early boot attacks. this includes bootkits but also early drivers, such as AV drivers and others, which protect the system further. if you don't have it, any security can be compromised before it's active, via different methods.


How do you determine when to push the button?


Instead, write-protect the firmware by default, and require the user to press a physical button on the back of the PC to write-enable it (for a limited duration/until the next reboot)


Any time you're reinstalling the OS and suspect the old OS had malware.

Or if you want to make it simpler, any time you're reinstalling the OS.


Once a day ought to do…


It sounds like the biggest contributory problems here are:

1. Allowing unattended/automatic BIOS updates from a running OS at all

2. Being so paranoid about attacks by a spy with physical access to the computer that the keys cannot be replaced or revoked

I'm not a security researcher, but to just shoot the breeze a bit, imagine:

1. The OS can only enqueue data for a proposed BIOS update; actually applying it requires probable-human intervention. For example, reboot into the currently-trusted BIOS, and wait for the user to type some random text shown on the screen to confirm. That loop prevents auto-typing by a malicious USB stick pretending to be a keyboard, etc.

2. Allow physical access to change crypto keys etc., but instead focus on making it easy to audit and detect when it has happened. For example, if you are worried Russian agents will intercept a laptop being repaired and deep-rootkit it, press a motherboard button and record the values from a little LED display, values that are guaranteed to change if someone alters the key set and/or puts on a new signed BIOS. If you're worried they'll simply replace the chipwork itself, then you'd need a way to issue a challenge and see a signed verifiable response.


Platform keys can be replaced given physical access to the computer. In fact they can generally be replaced by regular UEFI updates.

The problem here is in trusting, nay expecting, your average motherboard maker to either know anything about key management or give a shit about key management.


Not any less reasonable than expecting a mechanic to be competent and knowledgeable with brakes.


... except that in my experience most mechanics are competent with brakes, and most motherboard makers are not competent with cryptography, or indeed with anything having to do with software.

The auto repair industry has certain standards, and the computer industry... doesn't. In fact, the computer industry does everything it can to insulate itself from any kind of responsibility.


> most mechanics are competent with brakes

Because if they aren't and something happens the mechanic is the one who ends up rotting in a cell. Put the same penalties in place for ODMs and OEMs, mandating that machine owners absolutely always can change the locks to their own property, and mysteriously every single problem we have ever seen with secure boot is no longer some obscure inevitable unavoidable technology issue.


Exactly why I do my own brakes.

And why I want to control my own keys.


Luckily, you can ignore the factory keys and load your own. This issue affects the default configuration; from what I can tell, loading in your own PK will override the built-in ones.


I was thinking about this too, thinking about the TPM 2.0 configuration of some machines. However, the keys used by TPM are not the "platform key".

> from what I can tell loading in your own PK will override the built-in ones

How can one go about doing this? If you have any resources that can show how, please share them. The public key of the "platform key" is "fused" into the hardware, is it not?


> And why I want to control my own keys.

Such as the keys to one's own house.


> Allow physical access to change crypto keys etc, but instead focus on making it easy to audit and detect when it has happened.

Shooting the breeze as well...

Have some (non-modifiable, non-updatable) portion of the firmware that, on boot, calculates a checksum or hash of the important bits at the beginning of the chain of trust (efi vars, bios).

Then have it generate some sort of visualization of the hash (thinking something like gravatar/robohash) and draw it in the corner of the screen. Would need some way to prevent anything else from drawing that section of the screen until you're past that stage of boot.

That way every time you boot your computer you're gonna see, say, a smiling blue kitten with a red bow on its head. Until someone changes your platform key / key exchange keys or installs a modified bios, and now suddenly you turn the computer on and it's a pink kitten with gray polka dots.

That way you don't have to actively _try_ and check the validity. It'd be very obvious and noticeable when something was different.
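For what it's worth, OpenSSH's randomart is this exact idea in miniature: a deterministic picture of a key fingerprint that you only really notice when it changes. Assuming a key at the default path:

    # -l prints the fingerprint, -v adds the ASCII-art visualization
    ssh-keygen -lv -f ~/.ssh/id_ed25519.pub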


I think the weakness comes if someone can predict or infer what the current display is, and then craft a malicious update that generates something visually similar enough to pass unnoticed.

Perhaps the kitten's bow is pink, instead of red, etc. Even a little bit of wiggle room makes the attacker's job a lot easier, much like the difference between creating something that resolves to a known SHA256 hash versus something which matches a majority but not all of the bits.

A simpler approach would be for the small piece of trusted code to discard and replace the hash/representation with a completely new, sufficiently-different one whenever anything changes.


This fails to consider the possibility that the display hardware will be tampered with. It also does not consider if a copy of the picture is made and is then displayed by a separate program that pretends that the booting is slower than it actually is.

> Would need some way to prevent anything else from drawing that section of the screen until you're past that stage of boot.

It might need to prevent drawing anything on the entire screen. Otherwise a program might be able to modify the resolution, refresh rate, etc, to try to hide the picture or to display a different one.


I think that this is part of the way to do it, but not all of it. I might consider:

0. All of the BIOS code and other hardware code should be FOSS. This should be printed in the manual as well. A simple assembly language might be preferable, and if the hex codes are also printed next to it, they can also be entered manually if necessary.

1. The operating system cannot update the BIOS at all. To do so requires to set a physical switch inside of the computer which disables the write protection of the BIOS memory, and also disallows the operating system from automatically starting.

2. Require keyboards, etc to be connected to dedicated ports, not to arbitrary USB ports. (This is possible with USB but is a bit difficult; PS/2 would be better.)

3. You can program it manually (whether or not the BIOS memory is write protected) without starting the operating system (this makes the computer useful even if no operating system is installed); perhaps with an implementation of Forth. When BIOS memory is write enabled, then such a program may be used to copy data from the hard drive to the BIOS memory.

4. Like you mention, it should make it easy to audit and detect when keys have been changed. An included display might normally display other stuff (e.g. boot state, temperature measurement, etc), but a switch can be used to display a cryptographic hash. If you always fill all of the memory (even if part of it would not otherwise be used) then it can be difficult to tamper with in the case of an unknown vulnerability.

5. I had seen a suggestion to add glitter and take a picture of it, to detect physical tampering. This can help to avoid alterations of the verifications themselves. If it is desirable, you can have multiple compartments which can be sealed separately, each one with the glitter. If some of these compartments are internal, a transparent case around some of them might help in some ways (as well as to detect other problems with the computer that are not related to security).

However, even the above stuff will need to be done correctly to avoid some problems, since you will have to consider what is being tampered with. (You might also consider the use of power analysis to detect the addition of extra hardware, and the external power can then be isolated (and a surge protector added) to mitigate others attacking your system with power analysis and to sometimes mitigate problems with the power causing the computer to malfunction.)


0. Most of the UEFI is already open source. See TianoCore.

1. There are some things that may need to be updated from time to time that need to be applied before the OS is loaded - microcode updates being one of these. I would still like a physical write-enable switch.

2. Making a keyboard that is not a real keyboard is easier than ever with things like Arduino and Raspberry Pi, and it doesn't matter the interface. There is probably not a way to verify physical presence that can't be duplicated remotely. At some point humanity has to get beyond the primitive mentality of "this stuff on a computer monitor/from a speaker looks/sounds just like real stuff so it is the real stuff" and we have to accept that computers are machines and not in and of themselves a proxy for reality unless specifically considered so.

3. Funny, the original 1981 PC booted to ROM BASIC if it couldn't boot off of anything, so it was useful without an OS. I really wish UEFI firmware was on a replaceable SD card and the system would literally have no firmware if it was not present. I would pay the 2 cents more it would cost OEMs. With all the capability in modern chipsets I feel like this would be trivial to do.

4. Good idea. I wish computers had a separate display that is attached through some legacy interface like RS-232 and that doesn't go through VGA at all for this purpose, like a cheap LCD screen.

5. The old punched cards were very low density, but had one really nice property: you could physically see the data with nothing more than your eyes. It's funny that a stack of punched cards could potentially be more secure than millions of instructions of code hidden in a NAND or ROM that you cannot see or verify except with another device that you also have to trust and run on a platform you trust. Even then you can't really see the bits on a NAND or ROM without special expensive equipment. It'd be cool if there could be a high-density storage device where the binary contents are somehow physically viewable and discernable without a CPU needed. Something like QR codes but much, much more high density.


1. Yes. However, disallowing the operating system from automatically starting does not mean that the operating system cannot be started at all. If you deliberately want the operating system to add microcode updates like that, then you can perhaps type "AUTOBOOT" (or whatever the appropriate command is) at the Forth prompt that comes up when the write-enable switch is activated (or, if you don't like that, you can instead write the code to read the microcode updates from a disk, verify their cryptographic hash, and then apply them). FOSS microcode updates would also help with the security issues when doing so.

2. This is true, and can be useful in some circumstances, but having a dedicated port is still more secure, since it means that it will only act as a keyboard if you expect it to do so. (This does not prevent the external device from providing undesired input if it is connected to the keyboard port, but it does prevent it from doing so if it is connected to a different port.)

3. I know that the original 1981 PC has ROM BASIC, and I think that newer computers ought to be designed to do such a thing too (although you could use Forth instead of BASIC if you prefer).

4. I meant an internal connection, not related to any of the existing ones; leaving the RS-232 free for connecting external devices that will use RS-232.


>1. The operating system cannot update the BIOS at all.

This would be so much more advanced than we have now.

Reverting to an approach proven so superior over more decades would not be a step backward by comparison to UEFI.

You really need to once again be able to reflash your motherboard using a clean image and have no possibility of any malware remaining on-board after that, if things are going to be as advanced as they once were.

For decades I thought it was always going to be normal for a quick reflash of the bios to give complete confidence and trusted validation that you could then rapidly rebuild a verifiably clean system from scratch using clean sources every time.

Progress can surely occur without advancement :/


The big mistake, though, was back when all this was being enabled on PCs: the Linux vendors, out of fear that the rest of the industry would lock them out, standardized on shim and the MS certificates in the firmware. Thus requiring MS to sign the first stage of every Linux install/boot, rather than doing that while also defaulting to an environment where the distros would boot in UEFI 'setup mode', enroll their own cert/key chains during the first provision/boot, and then permanently switch to user mode. Had they done that, this entire article would have been just about meaningless, as all those test keys would have been replaced the moment the machine was installed.

So today a decade+ later there still isn't a standard way to automatically enroll a linux distribution's keys during initial install in any of the distributions (AFAIK).


that's been out of date for at least 6 years. most Linux distros already support many ways to generate your own keys and automatically sign your kernels and modules. and bioses have ways to enter "user mode" so you can upload your PK etc.

but still, since the attack this protects against is out-of-this-world rare... very few orgs bother to even document it in their main guides, because it gives zero protection and infinite support load


I don't think you are understanding my point.

The distro installer should, if it detects setup mode, automatically ask the user if they wish to replace all the existing keys and enroll distro-supplied certs, keys and dbx entries. Except none of the distros have this infrastructure built, outside of their dependence on Microsoft.

And no, none of this is needed if all you want is to be able to self-sign a kernel/etc, because it's possible to install a MOK key to shim, but that isn't the point. The point is that the vast majority of Linux users aren't set up to protect a cert/key chain from an attacker, which is the entire reason for secure boot. If your attacker is sophisticated enough, they will be stealing the signing keys from your machine/org and signing their own updates. Which is why MOK and self-signing is a mistake for ~100% of Linux users.


ah yes. good point.

it doesn't help that the team (guy?) doing all the systemd unification for those features now works for Microsoft anyway.


Arch has a pretty useful key enrollment tool that I'm sure exists on other distros too. Only command line, though. There's also tooling for enrolling a custom key database if your firmware doesn't accept the standard API by creating a bootable key management update tool with your preferred keys.

There's a guide for both approaches here: https://wiki.archlinux.org/title/Unified_Extensible_Firmware.... You'll need to make sure whatever distro you use has the right hooks to sign the boot images after each upgrade (i.e. an apt callback rather than a pacman callback) if you're not using Arch, of course.

Using the sbenroll tool, the process is three commands (generate keys, enroll keys, sign current bootloaders) plus whatever extra BIOS interfacing logic your computer needs on top of normal secure boot stuff like unlocking the BIOS through a password.
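For reference, with sbctl (which I believe is the tool that guide describes) the happy path looks roughly like this; untested outside Arch, and the bootloader path depends on your ESP layout:

    sbctl create-keys                          # generate PK/KEK/db keypairs
    sbctl enroll-keys -m                       # enroll them (-m keeps Microsoft's keys, e.g. for option ROMs)
    sbctl sign -s /boot/EFI/BOOT/BOOTX64.EFI   # sign the bootloader and remember it for re-signing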


As I pointed out to the other respondent, I don't think people are understanding what I'm saying. I'm not suggesting that it's not possible to manually enroll, or self-sign (which should come with a giant warning that it basically invalidates much of the security if the signing keys aren't protected with something hopefully more complex than a keyboard-entered password).

Basically the installers should be replacing the existing certs and keys, with distro supplied ones which are maintained along with global DBX entries by the distro itself, with a distro supplied KEK/etc where the private keys are stored in a high security environment not available to most users.

It's really the kind of project the Linux Foundation should be sponsoring, so the infra could be shared cross-distro.


It feels utterly absurd that devices typically have certain keys baked in that cannot be removed. I believe there are still Microsoft keys on nearly every device?

It's unconscionable to tell users this is here to keep you safe, but that you have no control over it, and if something goes wrong, well, too bad, at best we might provide an update.

(Also that governments can probably force these root-of-trust companies to sign payloads to circumvent security is also pretty icky to me.)


As I understand it, that's both the whole point of, and limitation to, the hardware root of trust - it can't be changed even with a firmware update.

Of course, if the key used to sign the firmware is compromised, the root of trust still technically does what it is supposed to do - verify signatures - it's just that it becomes irrelevant in terms of security / integrity.


>As I understand it, that's both the whole point of, and limitation to, the hardware root of trust - it can't be changed even with a firmware update.

The OP states that the vendors could have revoked the compromised platform key with a firmware update. They just didn't bother.


They'd also need to know every user has upgraded the boot loader such that the system doesn't depend on those compromised keys!

That does make it quite difficult to pull off any kind of key rotation. I'm not sure, but I think (well known Secure Boot tool) sbctl is saying that you can sign a bootloader with multiple keys, which would permit creating a bootloader that would work with the compromised & the new root-of-trust, which at least opens some window of possibility. https://github.com/Foxboron/sbctl/blob/master/docs/sbctl.8.t...


This consumer (me) values security highly enough that he would prefer for the firmware update to render the machine unbootable (as long as it remains possible to render the machine bootable again by re-installing software).


It's like buying a house with locks that cannot be changed.


Previously, in 2023, Intel lost its private UEFI key in the MSI hack. https://news.ycombinator.com/item?id=35843566

This time it's AMI. It cannot get bigger than that.


I'm not sure it's reasonable to just treat it as an AMI problem, given that AMI literally named the key "DO NOT TRUST - AMI Test PK". Obviously AMI was stupid to trust the OEMs to, you know, have a clue what they were doing and replace a wired-in test key in their production builds... but it's also true that, even if AMI should have known that the OEMs are idiots, the OEMs are still idiots.

I suppose you could also break it down and say that the particular idiot who hardwired a test key in an SDK or whatever should have known that both the rest of AMI and everybody at the OEMs would be idiots, and found a way to make it relatively hard for them to stay with that key. But however far you dig, it's idiots all the way down.


You are right, idiots all the way down. AMI should have created a PK generation script for those idiots. And you need such a script, because everything which can go wrong will go wrong. E.g. they'll generate keys with 2044 bits, or some such.


UEFI comes with SB, Microsoft introduces TPM, Google introduces WEI. They will say it is for your benefit, and in part that is correct, but security can be achieved through many measures, and corporations will choose the solutions that grant them more ability to control your device.

This leads to accumulation of "power", and monopolization of it in systems leads to vulnerability. One point of failure is enough to compromise entire ecosystem.

Just reminds me that Apple checks every application you run, for "safety reasons" (rather, it checks app certificates, but that is nearly the same).


"It's EOL so no fix" is pretty shitty. It was defective for its entire unsupported life as well.


It's a bit of a surprise that most things work most of the time, given how shaky the foundations they're built on are.


If you dig deep enough all our land is on a floating basis.


> ... key players in security—among them Microsoft and the US National Security Agency ...

This phrase does not sit well with me.


NSA doesn't want the government and critical businesses to get hacked, unless they are doing it themselves. That's why they support many security features in both proprietary and open source world.

However, in the US, they don't need to hack; they can just ask for data lawfully.


The largest provider of security holes and the largest user of them.


Key player doesn't imply a positive contribution.. A lot of national, corporate, and private security hinges on Microsoft.

Also, people have played with Microsoft's keys, for sure.


Isn't that a vulnerability that still requires physical access to hardware?

Or is that just a protection against rootkits?

I still fail to understand what secure boot is protecting against: if a machine is compromised remotely, does secure boot prevent installing a rootkit that's invisible to virus scanners?


Secure Boot protects against booting a kernel image that's been tampered with. If the running kernel is vulnerable to root-level exploits, it can't help you, but it can prevent a malicious attacker from switching a trusted kernel for a malicious, modified copy.


> it can prevent ... switching a trusted kernel for a malicious, modified copy.

Or a free OS.


Being able to enroll your own keys (or disable secure boot entirely) is a requirement for being a compliant implementation.

So sign your own kernel with your own keys and enroll them in your UEFI, and you have zero problems installing "a free OS" (it can also just be regular EFI binaries).


Making a non-microsoft product require manual key enrollment, while Microsoft products do not require such enrollment, sounds like abusing monopoly status to give new entrants a disadvantage. The 1990s antitrust people would have a field day with that one.

Also, by the way, I am an ex Microsoft employee, bet you wouldn't guess that from these comments.

I do personally consider secure boot and TPM to have been pushed in bad faith, not for serious security concerns.


> sounds like abusing monopoly status to give new entrants a disadvantage.

It is, and it isn't something I like, I'd prefer if no keys were enrolled by default.

> I am an ex Microsoft employee, bet you wouldn't guess that from these comments.

No, I wouldn't have guessed, but MS is so big that just saying you were an MS employee could mean any one of thousands of departments not even remotely related to Windows. But that is neither here nor there, as it doesn't change anything about my statement.

> I do personally consider secure boot and TPM to have been pushed in bad faith, not for serious security concerns.

Sure, but I still prefer to have this now that I can use it, even if its introduction was in bad faith (which it was, considering IIRC there were e-mails floating around talking about whether they could get away with making it only work with Windows - or maybe it was some other security mechanism).


> could mean in any one of the thousands of departments not even remotely related to Windows.

That's true. I was a dev in Windows though. I wasn't privy to any memorable internal discussions about secure boot.

Anyway, I'm just saying I'm not a kneejerk Windows or MS hater, which I think I can come across as in discussions like this.


I also am not a blind MS hater or supporter. I probably do often give MS too much leeway for a lot of things that more skeptical people would basically instantly dismiss. I guess I just try to make the best out of what is given to me.

> I wasn't privy to any memorable internal discussions about secure boot.

I think it was a leaked email from Bill Gates around when UEFI or Secure Boot was becoming a thing. I wasn't able to find it after searching for a while though.


Back in the day the euphemism was "trustworthy computing". That may be a good thing to Google to find those emails. I remember them too. Another name was "Palladium".


> Being able to enroll your own keys (or disable secure boot entirely) is a requirement for being a compliant implementation.

That may be true on x86, but on ARM, Microsoft specifically requires that you not be able to do either of those things:

> 13. On ARM platforms Secure Boot Custom Mode is not allowed. A physically present user cannot override Secure Boot authenticated variables (for example: PK, KEK, db, dbx).

> 18. Enable/Disable Secure Boot. On non-ARM systems, it is required to implement the ability to disable Secure Boot via firmware setup. A physically present user must be allowed to disable Secure Boot via firmware setup without possession of PKpriv. A Windows Server may also disable Secure Boot remotely using a strongly authenticated (preferably public-key based) out-of-band management connection, such as to a baseboard management controller or service processor. Programmatic disabling of Secure Boot either during Boot Services or after exiting EFI Boot Services MUST NOT be possible. Disabling Secure Boot must not be possible on ARM systems.


That is true, but I wasn't talking about those considering we are on a post about x86 MoBos (I guess I could have clarified that).

And until this requirement on ARM is changed (or there are options I can buy which allow it) I don't consider it a secure platform.


> Being able to enroll your own keys (or disable secure boot entirely) is a requirement for being a compliant implementation.

No, it's really not.

Currently Microsoft requires enrolling keys and disabling SB to be available to qualify for "Designed for Windows" branding on x86 PCs. No such requirement holds for ARM PCs, and Microsoft may remove this requirement at any time.


not to mention it's safer than everyone using Intel's or Microsoft's free-for-all keys.



Seems apparent we need a Professional Engineering Certification process and our own disciplinary board similar to other engineering disciplines.

High time.


You could probably get away with just forbidding software license agreements that disclaim liability for any and every kind of negligent stupidity.


So forbid the GPL and BSD licenses?


To make that work in the US:

1: would need to onshore all work in the bootchain (software and hardware).

2: would put liability on individual engineers and take liability away from the corporate organisation.

3: accountability requires that engineers have authority.

4: in a team environment, blame is going to fall on the weakest members

5: it would completely fuck any individual open source development.

If you want to come up with better ideas for accountability, then read up on the witchhunts that occur after deadly failures in other engineering disciplines, and check that your idea fixes the problems you see.

The world is far more interconnected now than when my grandad was a certified engineer.


I know never to trust software written by hardware folks, but seriously, how do you ship a key whose CN is literally "DO NOT TRUST -- AMI Test PK" as the root of trust? That is outright malicious incompetence.


Because nobody ever looked at the text of the certificate. It was probably a binary file checked into the source control system and, since it seemed to work, nobody ever looked at it.

Probably came as part of the dev kit from AMI.


The only winning move is not to play. AMI should never have distributed test certs to begin with. Give your customers instructions on how to generate self-signed certificates (assuming they are accepted) or set up a dev CA that will sign test certificates. Then the damage from a key leak is limited to one vendor.


A bit offtopic, but when the news came out last week that Cellebrite could open the Trump shooter's phone, there was a PDF saying they can brute force all Android phones running Android 7 or newer. What changed there? Why can they brute force all newer versions?

https://www.documentcloud.org/documents/24833831-cellebrite-...


Android 7 is when Google switched from dead simple full-"disk" encryption to per-file hackery with complicated key management. Basically the whole data encryption scheme changed.


How do I check if my laptop is affected? Tried

    [System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFIPK).bytes) -match "DO NOT TRUST|DO NOT SHIP"

which gives me cryptic errors; GPT is of no help.


in a powershell um.. shell:

    [System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI PK).bytes) -match "DO NOT TRUST|DO NOT SHIP"

(note the space between Get-SecureBootUEFI and PK, and run it from an elevated prompt - the cmdlet needs admin rights)


tried that, it gives me cryptic errors and gpt is of no help.


I just wanted to check if I'm affected.

...

then remembered I'm using custom platform keys

tbh. I don't understand why secure boot is built around global roots of trust instead of ad-hoc per-device trust (i.e. like custom platform keys but with better support), at most supported by some global PKI to make bootstrapping on initial setup easier

this would not eliminate, but massively reduce, how much "private key got leaked" vulnerabilities can affect secure boot chains (and it would also move most complexity from the efi into user-changeable chain loaders, including e.g. net boot, etc.)

PS:

To be clear, "I don't understand why" is rhetorical; I do understand why, and I find it a terribly bad idea.
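(If anyone else wants to check from Linux rather than PowerShell, something like this should work, assuming efitools is installed:

    # dump the Platform Key and look for AMI's test-key subject
    efi-readvar -v PK | grep -i "DO NOT TRUST"

No output means your PK at least isn't the leaked test key.)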


Unless I'm missing something, Secure Boot as designed is fundamentally broken.

Its root of trust is the BIOS/Firmware, which can be updated from a running OS. There is no hardware root of trust.

How Secure Boot Works

Secure Boot ensures that a device boots using only software trusted by the Original Equipment Manufacturer (OEM). Here's a high-level overview:

1. Power On and Initialization: The CPU initializes and runs the BIOS/UEFI firmware, which prepares the system for booting.

2. Platform Key (PK) Verification: The firmware verifies the Platform Key (PK), which is used to validate Key Exchange Keys (KEKs).

3. Key Exchange Keys (KEK) Verification: The KEKs validate the allowed (whitelist) and disallowed (blacklist) signature databases.

4. Signature Database Verification: The firmware checks the allowed (db) and disallowed (dbx) signature databases for trusted software signatures.

5. Bootloader Verification: The firmware verifies the bootloader’s signature against the db. If trusted, the process continues.

6. Kernel and Driver Verification: The bootloader verifies the OS kernel and critical drivers’ signatures.

7. Operating System Boot: Once all components are verified, the OS loads.

Apple Secure Boot Process

Apple adds hardware-based security with the Secure Enclave:

1. Secure Enclave Initialization: Separate initialization handles cryptographic operations securely.

2. Root of Trust Establishment: Starts with Apple's immutable hardware Root CA.

3. Immutable Boot ROM Verification: The boot ROM verifies the Low-Level Bootloader (LLB).

4. LLB Verification: The LLB verifies iBoot, Apple's bootloader.

5. iBoot Verification: iBoot verifies the kernel and its extensions. The Secure Enclave ensures cryptographic operations remain protected even if the main processor is compromised.

For more details, check out:

- <https://uefi.org/sites/default/files/resources/UEFI_Spec_2_8...>

- <https://www.apple.com/business/docs/site/Security_Overview.p...>

I would really love to have a hardware root of trust on a Linux or other open system, with a hardware security module of sorts that is programmable, so I decide what the root keys are, and is able to measure the firmware boot process, establishing a proper audit trail or chain of trust.

I can't remember the HN formatting rules, so expect an edit shortly to make this look better.

Edit: I did a little more poking. It's not quite as bad as I thought, because at least in theory, the BIOS will verify a digital signature of a BIOS update before flashing it.


The firmware updates from a protected capsule, so it can't be updated without signature verification, effectively closing the loop. It's possible to add a 3rd party root of trust (TPM/etc) to this; it's just vendor-defined whether a platform uses an additional component to validate the PK/firmware/etc earlier in the process.


> Secure Boot ensures that a device boots using only software trusted by the Original Equipment Manufacturer (OEM)

"We sold you this house with a front door designed where our key will always let us in". Why do we put up with this shit?


>In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat

This joke never gets stale... wait, it is not a joke?

I still believe the only reason for this to exist is to eventually turn general computing devices into a locked down Cell Phone Spying Device.


This has been my theory since Windows 11 required TPM. It's not to protect the consumer, it's to protect the IP-holder.

The PC is the lone outlier in the locked-down, walled-garden world of consoles, cell phones, tablets, smart TVs, EVs, etc. I think there's a concerted effort to change that.


Absolutely. Look at all of the changes to the media stack Microsoft made for Vista and none of them are to directly benefit the person who bought the OS license. If you have ever wondered how a 486 could play MP3s and still run X but your modern laptop gets hot and spins the fan when you are playing those same MP3s it is because the media companies demanded it.


If they're literally the same MP3s, it's because modern software sucks. You can still play them with mpg123 with an immeasurably low cpu load.


The pedant in me wants to point out that most 486s couldn't play MP3s (they just don't have the horsepower, an AM-586 or a DX4 maybe) and you'd need a Pentium. /pedant

OK, now to my real point. Vista is actually a really good call-out of MS being inconsistent about this. The major changes in Vista (moving graphics drivers largely out of the kernel, simplifying what sound drivers could do) were all predicated on the fact that hardware vendors are notoriously bad at software. It cannot be overstated just how bad they are; the NT kernel was originally intended such that vendors would make their own HAL.. one tried, and it was so bad MS just NOPE.jpg'd that and did it themselves. So for MS to double down on a system that relies on the same known-to-be-horrible-at-software vendors is just hilarious to me.


A 486 DX4-100 could play mp3s in stereo (or in mono at 66mhz), but do absolutely nothing else at the same time. I used a DOS mp3 player (mpxplay) and it could be done.

Docs suggest stereo was possible at DX2-80mhz if you disabled screen output and used heavy mp3 file pre-buffering.

Top level comment here claims the issue was the on-screen animations and they were able to build a highly optimized mp3 player on a 286 (dunno through what speaker): https://m.youtube.com/watch?v=b0zZpzxHSeM

Even on a later pentium, I had to minimize throttle priority on my web browser because smooth scrolling requires a ton of juice. Still does to this day looking at power consumption on an iPhone.


> Even on a later pentium

MMX helped a lot here; I remember my Pentium MMX 233 had no trouble playing games and playing music. To give you an idea of how crappy that machine was otherwise... it was a Packard Bell with an onboard ATI chip that barely qualified for 3D acceleration. The Pentium 166 (non-MMX) we had would chug on things that the MMX just didn't care about.

> I had to minimize throttle priority on my web browser because smooth scrolling requires a ton of juice. Still does to this day looking at power consumption on an iPhone.

This still to this day amuses me. Metal and DX12 both have calls designed to support this natively on the GPU, by allowing the application to shift the rendered area of a very specific box (without rerendering the entire screen) and then render behind in the blank. As far as I know, only Safari on iOS does this even close to properly, and even then it has other iOS Safari related quirks around it that Apple refuses to fix.


Indeed, I share this outlook, as do others: https://boingboing.net/2012/01/10/lockdown.html


Better late than never. “The actual user of the PC — someone who can do anything they want — is the enemy.” (Intel, 1999)¹

¹ https://www.zdnet.com/article/the-biggest-security-threat-yo...


fear and planned obsolescence; "all these old things are bad... never mind it's only a day old. throw it away already, and buy the new one. no discounts!"


Secure Boot itself is fine; the problem is shipping ANY keys by default. I use Secure Boot myself with my own signed keys on my laptop, and it's nice knowing it can only run what I allow it to run (a password-protected UEFI ensures only EFI binaries or kernels I signed get booted, and that ensures it mounts my encrypted partitions).

The problem is that when these other keys are pre-shipped, they invalidate the entire "ensures only [...] kernels I signed" part. And just removing the pre-shipped keys can cause other problems: https://github.com/Foxboron/sbctl/wiki/FAQ#option-rom
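As a sanity check on a setup like this, on systemd machines bootctl reports whether Secure Boot is actually enforcing and which mode you're in:

    # expect something like "Secure Boot: enabled (user)"
    bootctl status | grep -i "secure boot"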


It's only an F12, tab, enter, down, tab, enter away to disable if you really don't like it that much.


Yes right now, what about 5 to 10 years from now ?

Or maybe for that option in the future, the device will cost thousands of USD more.

Or you need a special professional license to get a non-locked down device, and the license will cost more than a house in a rich suburb.


People have been asking "what about five years from now" for twenty years, so extrapolating from the current rate of change, I'd say things will be fine.


You can disable Secure Boot on x86 PCs, but nowhere else.


So how do people install openbsd on the thinkpad x13s?


Here's an article about the rule being made in the first place (IIRC, the rule got made at the same time Windows on ARM was itself first given to manufacturers): https://softwarefreedom.org/blog/2012/jan/12/microsoft-confi...

I'm not sure how it's working on those laptops, but I'd imagine the choices are either that Lenovo got given an exception, the rule as a whole got changed, or that Microsoft just hasn't noticed or is intentionally looking the other way.



