PixieFail: Nine Vulnerabilities in UEFI Implementations (quarkslab.com)
152 points by weinzierl 11 months ago | 98 comments



It is so disappointing that ARM and RISC-V are adopting UEFI. We had a chance at a clean break, but noo, let's make persistent firmware-level rootkits easy and cross-platform with EFI bytecode. Yep, a network stack with hard-coded web addresses in your firmware, brilliant idea. /rant


UEFI does not mean you need a network stack. Your dislike is of a particular implementation of the UEFI specification. The part of the spec that is being used is the contract between the bootloader and operating system which gets loaded. There are a large number of benefits that come from standardizing this interface, and most of the drawbacks you perceive can be avoided depending on how vendors go about the implementation.


I'm personally (uneducatedly) mostly for it because I like the way things "just work" on PCs. No matter what components you put in, the thing will boot. A sibling comment mentions that this is due to ACPI - I guess then I want that too.

Needing a separate build for nearly every different SBC sucks and I'm happy about anything that will broaden compatibility here.


It's entirely because it's expected on PCs and not expected on SBCs. Devicetree exists and can solve this problem, but it means the SBC and ARM SoC vendors would need to upstream their drivers like x86 vendors do. Until you have that, ACPI vs devicetree is irrelevant for the purposes of improving this situation.


I'm reasonably confident that firmware/boot and drivers are orthogonal; consider that there are PCs with normal UEFI boot that will happily boot any Linux image you want but it'll come up without drivers because nobody's written them, or on the other hand the Raspberry Pi has open source drivers[0] but the boot situation is still its own bizarro thing that's unlike any other machine so you can't boot the same USB stick on a Pi and anything else[1].

[0] Weirdly not all upstreamed last I looked, but mostly.

[1] I mean, you might be able to put a Pi-compatible bootloader and, say, a UEFI+ARM bootloader on the same stick, but I mean with a single boot path.


This is a narrow view. There are more operating systems out there than just Linux. Standardizing peripherals the way PCs have done would have many benefits. Device tree basically forces operating systems to need per-board images instead of a generic image that works on most boards. This doesn't scale well, especially on operating systems without a stable driver ABI.


There's nothing about device tree which forces this. It's just how it's normally used because of said constraints. The ideal device tree situation would be for it to be provided by the SBC vendor and then passed into the generic image, just like ACPI. The reason this isn't done is that this generic image doesn't exist at the moment. Switching to ACPI still means this generic image doesn't exist.


It's the reverse. DTs allow having a common image, where just the DT differs (and can be selected on boot, or passed from FW if it's not part of the OS image).


> Standardizing peripherals the ways PCs have done would have many benefits.

So, Linux's implementation would work well, whilst Windows has to pretend to be Linux in order to have it work mostly kinda unless the device explicitly supported Windows?


UEFI is a necessary evil, we just have to get better, open-source implementations. Device Trees and alternatives are absolutely horrid especially in comparison.


I'm not a huge fan of DT, but I've also reversed a UEFI BIOS once and I absolutely detest that hot pile of garbage. Obvious insane MS influence, with e.g. too many GUIDs everywhere for no reason. Overall astronaut architecture with gratuitous phases. Humongous abstraction layers that are not even optimized out at compile time, replacing a single "out" instruction with object-oriented crap and virtual layers, probably involving GUIDs. Insanely large scope. Insane vulns over and over again, some of which completely break the PC security model for years, AND hinder maintainability at the same time. Yuck. That's one of the reasons that makes me want to use Macs everywhere (well, maybe it's just that I don't know enough about how their bootloader is designed and it is actually worse).


M1 Macs don’t have UEFI, just a very limited system that can boot macOS from the SSD. The boot menu is macOS. The initial firmware has no drivers beyond the SSD. That’s why there is such a long delay between power on and the screen displaying the logo. And why there are no keyboard shortcuts. And why it doesn’t support external boot discs.

https://news.ycombinator.com/item?id=26114417

https://support.apple.com/guide/security/boot-process-secac7...


Device trees aren't an alternative to UEFI, but to ACPI.


True, I skipped a step in my comment. My assumption is that ACPI and UEFI/BIOS usually exist together. Without UEFI you have to find an alternative to both, and neither DTs nor the (bootloader) alternatives are pleasant.


ACPI table issues are just as unpleasant as devicetrees, if not more so.

What makes x86 feel nicer is that most of the peripherals fall into two categories: 1) standardized for the arch, so their configuration can be assumed and hardcoded, or 2) attached to a bus that can be dynamically enumerated (e.g. USB, PCI). This generally makes the per-system configuration relatively simple.

Embedded systems are not like this. Each SoC is effectively its own system architecture that just happens to share a CPU instruction set with other systems, and the devicetree ends up being the description of this architecture. So it looks gross and ugly and complex by comparison, but the complexity of defining peripherals for a system has to exist somewhere!

The messy bits are, of course, that both the OS and bootloader need access to subsets of this data, and you'd like to ship the tables with the bootloader for a SoC (so the OS can be generic and not care), but really, the bootloader only needs a little bit of this, and the OS needs almost all of it... and this leads to coupling between the two being tighter than it should be.


What's so wrong with u-boot? Most of the trouble people have with it is "thanks" to chipset vendors who forked their version somewhere 10-ish years ago and completely mutilated it instead of putting in the effort to get their stuff upstreamed.

The only major thing it can't boot to my knowledge is Windows, but it might be doable if you chain-load Grub.


u-boot needs to be modified to run a given operating system if it isn't already supported (or a shim pretending to be one of the supported operating systems needs to be used). UEFI provides a way to support all operating systems through a standardized interface. u-boot implementing the UEFI interface provides the best of both worlds.


"implement just enough UEFI to boot stuff, not the whole kitchen sink" I believe is the target of the uboot project


Coreboot does ACPI just fine.


> Device Trees [...] are absolutely horrid especially in comparison.

Care to elaborate? I only have a passing interest in the ACPI / UEFI / Open Firmware / device trees / etc. story, but I haven’t encountered anything particularly awful about device trees, or (until you) anybody saying there was.


Device trees are what you get if you don't implement ACPI.

While there are alternatives, you generally seem to get "device trees and a barebones bootloader" on ARM and "UEFI + ACPI" on amd64.

ACPI will list hardware and necessary hardware properties based on some basic API calls to the system interface. UEFI initialises the ACPI data structure and exposes it to the bootloader so the appropriate drivers can be loaded and configured.

With device trees, you basically build the drivers and configuration into the kernel/OS you're trying to load. That's why compiling Linux on amd64 is generally easy and produces a single image, while for many other devices (smartphones, some SBCs) you need to compile a kernel per device. The device trees only need to be imported/written once per device (or device type, depending on how nice the manufacturers are), but that's how you get stuff like this: https://github.com/torvalds/linux/tree/master/arch/arm64/boo...

On ARM there are actually a few devices that implement UEFI, but most of them have Secure Boot locked in and configured to only boot Windows.

ACPI is not perfect and it's not technically required to have UEFI to implement something better than device trees, but I'm not sure if reinventing the wheel here is necessary or even preferable. UEFI already has open source implementations ready to go, with kernels and other tools already containing code to interact with those APIs, whereas a custom ACPI replacement protocol would need more implementation work.


Devicetree is in effect doing the same thing as ACPI tables. It actually usually is passed into the kernel by the bootloader. The main thing that prevents having one vanilla kernel that works on most ARM SBCs (where the vendor only provides a devicetree like they provide ACPI tables) is the fact that most of them don't have full upstream support so you need to run a custom kernel anyway. (Secondly, because it tends to be used primarily on systems where you are building your own patched kernel anyway, most devicetrees are not so much a generic description of a device as initialisation data for a specific driver)
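To make that concrete: a flattened devicetree (DTB) is just a binary table with a big-endian header, consumed much like an ACPI table. A rough Rust sketch of reading that header (field offsets per the devicetree spec; the helper name is made up):

    // Check the DTB a bootloader hands over: a fixed big-endian header,
    // then structure and strings blocks. No code, just data.
    fn parse_fdt_header(blob: &[u8]) -> Option<(u32, u32)> {
        // Read a big-endian u32 at a byte offset, bounds-checked.
        let be32 = |off: usize| -> Option<u32> {
            blob.get(off..off + 4)
                .map(|b| u32::from_be_bytes([b[0], b[1], b[2], b[3]]))
        };
        if be32(0)? != 0xd00d_feed {
            return None; // wrong magic: not a flattened devicetree
        }
        let total_size = be32(4)?; // size of the whole blob in bytes
        let version = be32(20)?;   // header version (17 for many years now)
        Some((total_size, version))
    }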


Lack of peripheral standardization is a big piece too. You're not going to include every possible ARM peripheral driver in your image. There are far fewer drivers necessary to include on a PC by comparison (and PCs are often less space constrained anyway, so including those drivers isn't a big deal).


It's not much different. PC linux distros already bundle basically all the device drivers for not just the PC box itself but all the peripherals you might plug into it. The ARM ecosystem if anything has fewer peripherals, it's just not upstream.


There is also the fact that ACPI allows embedding executable code, which in effect allows system vendors to hide proprietary drivers on it. E.g. HID stuff such as brightness control or keys which do not (generally) need any kernel code to work and are generally impossible to define with DT unless they are simple single-purpose GPIO pins.


ACPI is executable code.

"Much of the firmware ACPI functionality is provided in bytecode of ACPI Machine Language (AML), a Turing-complete, domain-specific low-level language, stored in the ACPI tables.[7] To make use of the ACPI tables, the operating system must have an interpreter for the AML bytecode."[1]

And it's not like this was intended to truly be fully interoperable. "Maybe we could define the APIs so that they work well with NT and not the others even if they are open."[2]

[1] https://en.wikipedia.org/wiki/ACPI

[2] https://issuepedia.org/1999/01/24/ACPI_extensions


This is semantics at this point, but ACPI "is" not executable code. It contains executable code in form of methods, but it is a structured container for many types of data (they are called "tables" for a reason). You could very well embed DT in ACPI and there are platforms out there which do that.
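For a sense of how table-like it is: every system description table starts with the same 36-byte header (layout per the ACPI spec; the struct name is mine), and only some tables, like the DSDT, carry AML bytecode after it:

    // The common header shared by all ACPI system description tables.
    #[repr(C, packed)]
    struct AcpiTableHeader {
        signature: [u8; 4],    // e.g. b"DSDT", b"FACP", b"MCFG"
        length: u32,           // total table size, header included
        revision: u8,
        checksum: u8,          // whole table sums to 0 mod 256
        oem_id: [u8; 6],
        oem_table_id: [u8; 8],
        oem_revision: u32,
        creator_id: u32,
        creator_revision: u32,
    }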


A device tree can be compiled to a separate file, then loaded by the bootloader and kernel. When a device tree is available and all drivers are present for a device, it's easy to boot.


I think device tree, as opposed to something that can enumerate whatever hardware is present, is part of the reason that you have separate ROMs for every single Android phone. (Whereas a single Linux ISO can run on just about any PC out there.)


That's not the reason. Have you seen the size of the average kernel that can boot on just about any PC out there? It's 10MB kernel code and easily 100MB initramfs, depending on how many video card firmwares you intend to support. And that's on a relatively "standardized" PC platform, with generic ACPI/PCIe/SATA device abstractions in the hardware.

ARM has none of that: no standardized platform, no device discovery through bus enumeration, no standard VESA graphics implementation. Device tree is a consequence of the non-standardization of ARM devices, not the cause. You could easily have the devicetree in the phone's ROM (just like ACPI, but without the executable code vector) and have the kernel read that data on startup. But you would need a kernel of at least the same size as the PC kernel+initramfs; and unlike on PC's, on phones people still (somewhat) care about code size and performance.


It's definitely only part of the reason. If it were the only reason, it would be easy enough for a 'unified' ROM to contain config/devicetrees for a few thousand models of phone, and auto-select which one to use at boot time depending, for example, on a commandline flag.


The DT is actually there to avoid recompiling the kernel for each specific device. But probably tons of vendors do not understand the purpose.


On RISC-V, UEFI (typically edk2) is a payload for SBI (typically OpenSBI).

If you hate edk2, you can run something else, e.g. u-boot, the Linux kernel, or your own code.

If you hate OpenSBI, look at oreboot, which implements something equivalent to u-boot SPL + OpenSBI in Rust.


Hardware rootkits make me lose sleep at night - has there been evidence outside of academia that they're used?


Yes, for a long time. Criminals have used hardware rootkits since at least 2008, but the most famous one is Intel Active Management Technology.

Wikipedia explains:

> Intel Active Management Technology, part of Intel vPro, implements out-of-band management, giving administrators remote administration, remote management, and remote control of PCs with no involvement of the host processor or BIOS, even when the system is powered off. Remote administration includes remote power-up and power-down, remote reset, redirected boot, console redirection, pre-boot access to BIOS settings, programmable filtering for inbound and outbound network traffic, agent presence checking, out-of-band policy-based alerting, access to system information, such as hardware asset information, persistent event logs, and other information that is stored in dedicated memory (not on the hard drive) where it is accessible even if the OS is down or the PC is powered off. Some of these functions require the deepest level of rootkit, a second non-removable spy computer built around the main computer. Sandy Bridge and future chipsets have "the ability to remotely kill and restore a lost or stolen PC via 3G". Hardware rootkits built into the chipset can help recover stolen computers, remove data, or render them useless, but they also present privacy and security concerns of undetectable spying and redirection by management or hackers who might gain control.

https://en.wikipedia.org/wiki/Rootkit#Firmware_and_hardware


Not sure that vPro counts as a rootkit if you pay extra for it and your computer comes with a vPro sticker on the front.

Well, except for the hundreds of bugs in vPro that cause it to be completely insecure...


These are all solving use cases. What do you suggest hardware vendors should do?


Open their software drivers and publish accurate specifications.


And then what? We implement a dozen different incompatible bootloaders for it? The value in UEFI is that it lets us use a common interface for booting, regardless of how it's implemented.

(To be clear, I overwhelmingly prefer FOSS and open specs, but we need to be clear about what things solve what problems)


Apple created a minimal bootloader for their Apple Silicon machines, seems to be working great.


Apple also controls their entire software and hardware stack, a bit different.


Oxide managed to do something similar as well: https://github.com/oxidecomputer/phbl


Oxide also controls their entire software and hardware stack (theirs is open source, at least)... Neither vendor has to worry about modularity or weird configurations.


The disclosure timeline section is worth a read in itself:

> 2023-11-14 Quarkslab replied to the prior requests and commentary from various vendors as follows: Stated that the blog post about the issues would contain the technical report submitted to the disclosure coordination forum and a detailed timeline of the relevant events in the disclosure process. It would include proof-of-concept code to trigger vulnerabilities 1 to 7 but NOT exploit code. Reiterated that the purpose of reporting the vulnerabilities was to help vendors identify and fix them, not to debate about the editorial policies for Quarkslab research work.


Pretty typical. Dealing with vulnerability reporting and disclosure has always sucked for researchers, and from my (very) limited experience on the vendor side, it isn't much better there. I'm honestly surprised more of these researchers haven't gone back to the bugtraq/full disclosure model.


What other vulnerabilities hide in proprietary UEFI implementations?

I have no interest whatsoever in installing MS Windows; are there any motherboards out there for AMD64/x86_64 that come without UEFI/BIOS?


> are there any motherboards out there for AMD64/x86_64 that come without UEFI/BIOS?

Your computer needs UEFI/BIOS for hardware initialization and launching your boot process.

There is an open implementation though, see https://libreboot.org/

> The Libreboot project provides free, open source (libre) boot firmware based on coreboot, replacing proprietary BIOS/UEFI firmware on specific Intel/AMD x86 and ARM based motherboards, including laptop and desktop computers. It initialises the hardware (e.g. memory controller, CPU, peripherals) and starts a bootloader for your operating system.


> What other vulnerabilities hide in proprietary UEFI implementations?

Most UEFI vulnerabilities seem to come from bugs in the open source reference implementation making it down to the proprietary firmware.

That said: as much as you would expect from any low-level C program. I hope that EDK will eventually find itself rewritten in a safer language to make it easier to spot mistakes. I would say Rust would be the best fit, but I admit that it's not a perfect fit; sadly, we lack safe low-level languages.

As for running without UEFI/BIOS: most ARM devices have an alternate bootloader (usually u-boot with some proprietary magic), but I don't know if that's much better. This approach usually requires per-device support from the operating system you want to install.


> I would say Rust would be the best fit, but I admit that it's not a perfect fit; sadly, we lack safe low-level languages.

Rust is a safe low-level language. You can start writing your UEFI implementation in Rust today [1], [2].

> As for running without UEFI/BIOS: most ARM devices have an alternate bootloader

UEFI/BIOS is not a bootloader; it initializes hardware and then passes responsibility on to a bootloader. Popular bootloaders are the Windows bootloader, GRUB, etc.

1: https://rust-osdev.github.io/uefi-rs/HEAD/

2: https://github.com/rust-osdev/uefi-rs/tree/main/template
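For a flavor of what that looks like, a rough sketch modeled on the uefi-rs template linked above. Exact item paths shift between uefi-rs releases, so treat this as the general shape (built for a target like x86_64-unknown-uefi) rather than copy-paste-ready code:

    #![no_std]
    #![no_main]

    use uefi::prelude::*; // Handle, Status, SystemTable, #[entry], ...

    // NB: no_std needs a #[panic_handler]; recent uefi crate versions can
    // supply one via a cargo feature.
    #[entry]
    fn main(_image: Handle, mut st: SystemTable<Boot>) -> Status {
        // Talk straight to the firmware's text output protocol; no OS exists yet.
        st.stdout()
            .output_string(uefi::cstr16!("Hello from Rust, pre-boot\r\n"))
            .unwrap();
        Status::SUCCESS
    }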


Rust is _a_ safe language, but if you're interacting with a lot of low-level hardware, you end up with a lot of unsafe {} code and manual analysis that takes away some of the advantages.

I see Rust as a C++ replacement more than a C replacement. I'm not sure if object oriented languages (well, "struct with functions" in Rust's case) with vtables and dynamic allocation and the like are a good match for that kind of code. Rust, as a language, simply can't deal with running out of memory; all allocations are assumed to succeed, which is a dangerous assumption, especially in low-level code like this.

Unfortunately, we don't really have a common C replacement with Rust-like safety guarantees. I'm hoping Zig will be able to provide some safety to the C ecosystem, but it's still a ways away from 1.0.

> UEFI/BIOS is not a bootloader

You're right, I meant "bootstrapping system". I'm sure there's better terminology here but the word escapes me at the moment.


Rust, as a language, doesn’t know anything about heap allocation. The “running out of memory” stuff you’re talking about is a feature of some APIs of the standard library, which you often completely forego in an embedded context. Or you can use the (still unstable, to be fair) APIs that return results instead. This is what the Linux kernel is doing, for example. It was a hard line for Linus, but didn’t prove to be an obstacle in the end.


Rust as a pure language perhaps not, but Rust as an ecosystem, has a standard UEFI alloc/dealloc system: https://github.com/rust-lang/rust/blob/master/library/std/sr...

One could, of course, write their own allocator and heap tracking system as part of the UEFI firmware, or reimplement the entire thing without any of the standard library and simply relying on the core library, but if you want to deal with fallibility, you'll have to reimplement every part of the language toolkit and use Option<> or Result<> for every API call, which would only complicate the rewrite more.


Yes, that is part of the standard library interfaces I was referring to.

> you'll have to reimplement every part of the language toolkit

You shouldn't need to, those APIs already exist. Like https://doc.rust-lang.org/stable/std/boxed/struct.Box.html#m... for example. Maybe there's some coverage missing, I personally use no heap when I'm working in this context.
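To make the pattern concrete, a small sketch using the stable try_reserve API (the function name is mine):

    use std::collections::TryReserveError;

    // Allocate a buffer without aborting on OOM: try_reserve surfaces
    // allocator failure as an error the caller can handle.
    fn alloc_table_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
        let mut buf = Vec::new();
        buf.try_reserve_exact(len)?; // fallible: no abort if the allocator fails
        buf.resize(len, 0);          // within reserved capacity: cannot reallocate
        Ok(buf)
    }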


Some piece of code has to configure the CPU, initialize memory before you can even think about loading an OS...



Framework has open-sourced the EC firmware, not the boot firmware. The UEFI implementation is still closed-source, proprietary InsydeH2O software.


Yes, but at least they're making soothing noises about going in that direction. Here are notes on coreboot work: https://community.frame.work/t/responded-coreboot-on-the-fra...

The chromebook variant already runs coreboot, with all changes upstreamed: https://frame.work/blog/introducing-the-framework-laptop-chr...


I do like the direction that Framework is going and I hope they will deliver to more countries during 2024.



My Librem 14 laptop came with coreboot/Pureboot.


Did you read the article about how this is a bug in the open source reference implementation?

You're going to need firmware anyway; otherwise your OS implements the firmware's responsibilities, and then you GOTO 1 and have other problems.


So does that mean that I may be able to "root" (my own) mainboards using (only) a maliciously configured DHCP server, and then trigger a PXE boot when booting the target machine?

That would be cool for many different reasons! And pretty fugly for some others...


UEFI was shoved down our throat, but today I can't think of one example where UEFI is better than BIOS, other than the partition size limit.


UEFI allows you to programmatically change the boot device from inside the OS. This is important for failed or degraded SSDs, as they can fail in weird ways (read-only, all reads hang, etc.) such that trying to boot from them can hang the system. That lets you detect a failing boot drive and gracefully fall back to booting from another mirror/RAID member, avoiding having somebody touch a server by hand.
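On Linux this amounts to a write to the BootNext variable through efivarfs; a hedged sketch (real tools like efibootmgr also clear the immutable bit efivarfs sets on variable files, skipped here):

    use std::fs;

    // Ask the firmware to boot entry NNNN exactly once on the next reset.
    fn set_boot_next(entry: u16) -> std::io::Result<()> {
        // Path layout is "<Name>-<VendorGuid>"; this is the EFI_GLOBAL_VARIABLE GUID.
        let path = "/sys/firmware/efi/efivars/BootNext-8be4df61-93ca-11d2-aa0d-00e098032b8c";
        // efivarfs file format: a 4-byte attribute word, then the raw payload.
        let attrs: u32 = 0x7; // NON_VOLATILE | BOOTSERVICE_ACCESS | RUNTIME_ACCESS
        let mut data = attrs.to_le_bytes().to_vec();
        data.extend_from_slice(&entry.to_le_bytes()); // BootNext is a UINT16
        fs::write(path, data)
    }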

There are plenty of other firmware systems that allow for the same thing, and I dislike UEFI as much as the next guy, but this is a feature that's been important at work.


efivars is probably the goofiest and most uncomfortable way of handling this. They created this giant wasteful infrastructure in EFI, and at the end of the day you're still poking around in a bespoke filesystem to hopefully communicate GUID-tagged facts about your boot drives to your firmware.

There were tons of other ways this could have been accomplished for far less effort and with previously existing systems.


For me:

* Not having "extended partitions" as a concept to deal with

* OS booting not dependent on "load an MS-DOS boot block from LBA offset X"

What I dislike:

* Not having a minimal straightforward shell to troubleshoot from

* Not letting the UEFI variables be easily user accessible/fixable


> Not having "extended partitions" as a concept to deal with

You can BIOS boot from a GPT-partitioned disk just fine.

> OS booting not dependant on "load an MSDOS bootblock from LBA offset X"

The only offset that's hardcoded by the BIOS is LBA block 0. The rest is from your bootloader. And GPT partitioning allows you to put the bootloader code in a proper partition rather than in the void under the stairs: https://en.wikipedia.org/wiki/BIOS_boot_partition

> Not having a minimal straightforward shell to troubleshoot from

Download shellx64.efi from the (Intel) EFI development kit. When it's present in the root directory of your EFI System Partition, most firmwares will include an option to boot into that shell. Or is your objection that it's a DOS-inspired command prompt rather than a Unix shell?

> Not letting the UEFI variables be easily user accessible/fixable

Don't know about this one. The linux kernel supports reading and writing EFI variables through efivarfs, but I don't know if that is predicated on firmware support and whether it includes access to all variables.


> * Not having a minimal straightforward shell to troubleshoot from

There is a shell for UEFI. It feels a lot like a reimplementation of MS-DOS shell, with "FS0:" to change drives etc, but it has directory listings, file copying, etc.

The interface is standardized and there's an open source implementation in EDK2.

If you build Linux kernels as "EFISTUB", they're simply executables in the UEFI shell.

https://github.com/tianocore/tianocore.github.io/wiki/ShellP...


The changes in UEFI are mostly for hardware manufacturers. There's only so much user facing functionality in boot firmware. One other thing it did better enable, through better support of modern hardware and the standardized extensibility, is GUI firmware interfaces. If you get into fancier enterprise hardware use cases the modules are also nice.


You can use a BIOS with GPT (which removes the partition size limit using LBA48); see page 4 of EDD-4, https://www.fpmurphy.com/public/EDD-4_Hybrid_MBR_boot_code_a... : "Hybrid MBR boot code overview: This annex describes how MBR boot code called hybrid MBR boot code may be constructed to support a GPT disk layout (see UEFI-2.3) in a legacy BIOS system."

I'm working on that for one of my projects. A BIOS can use GPT partitions to get LBA48, but the MBR partition standard could also be changed to use LBA48 instead of LBA32, which is what causes the limit.

You could even use the CHS fields to get even more space if needed: there are 3 bytes for the CHS start and 3 more for the CHS end, both often wasted by storing 0 alongside LBA32.
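For reference, a sketch of the 16-byte MBR partition entry in question (as a Rust struct; field meanings per the classic MBR layout):

    // One of the four partition entries in an MBR partition table.
    #[repr(C, packed)]
    struct MbrPartitionEntry {
        status: u8,          // 0x80 = active/bootable
        chs_first: [u8; 3],  // CHS of first sector (3 bytes, usually zeroed today)
        part_type: u8,       // e.g. 0xEE = protective MBR covering a GPT disk
        chs_last: [u8; 3],   // CHS of last sector (3 more mostly-wasted bytes)
        lba_start: u32,      // 32-bit LBA of first sector -> the 2 TiB limit
        sector_count: u32,   // 32-bit sector count
    }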

That said, I still consider UEFI to be better for at least 1) multiplatform support (different /EFI/BOOT paths to payloads named by their architecture), 2) fallbacks (shell + startup.nsh), and 3) defining how the persistence of settings works (efivars).


Secure Boot prevents rootkits more or less entirely.



It's all so tiresome. Just write these parts in Rust already.


Even better, dump that UEFI crap. Not even Rust can fix stupid.

Just direct-boot a static kernel and kexec() the real kernel -- like petitboot has been doing since forever: https://github.com/open-power/petitboot/blob/master/README.m...

Unless you like writing everything (display drivers, disk drivers, filesystem drivers, network stacks...) twice. Which is where these bugs come from...


You'd need to either have drivers in the kernel for everything, with no common fallback ("the ARM approach"), or have standards at the hardware/HW firmware level. Having a firmware-level abstraction layer that papers over this as a fallback may not be the best idea, but it works without too much politicking.


I would like to be able to ensure that only boot loaders signed with my private key can be executed. Secure Boot serves that purpose well, can I do that with your approach?

Likewise, demand and use cases for network boot exist, otherwise it wouldn't be here. Same goes for every other feature most users would consider bloat.


Yes of course you can. Just run `signify -V` in userspace under the pre-kexec() kernel to check the signature on the post-kexec() kernel/initrd.

You can network boot too; just run `busybox udhcpc`.

I think you misread my comment. I never described signature-checking or network boot as bloat. I said it was stupid to have to implement these things twice (once in mainline Linux and then all over again in kooky UEFI-land with its bizarre API, ABI, and wacky rules).

I still think it is stupid to do that, because it is. We have working, high-quality, battle-tested implementations of all this stuff. Use them.


I think Oxide's computers don't use UEFI, but I don't know what they replaced it with. Since they are a Rust shop, maybe something written in Rust, but if not it would also be interesting to know and to know the reasons why.


Others have already given good answers but also here’s a talk about this subject: https://www.osfc.io/2022/talks/i-have-come-to-bury-the-bios-...


They have a custom bootloader https://github.com/oxidecomputer/phbl that runs without AGESA or any of the normal stack of stuff. From their podcast it sounds like it was quite a bit of work to get it working.


Oxide co-founder Jessie Frazelle published an ACM Queue article about the security advantages of open source firmware in 2019:

https://queue.acm.org/detail.cfm?id=3349301


Yeah, if I remember correctly even AMD was amazed by what they did.


It works for them because they own the complete hardware stack and boot only their hypervisor, which is also something they control.

TL;DR: you can make it simple if you close the platform (this is not the same as open source vs. closed source) - open platforms end up developing complex interfaces so that they are actually open to end owners.


So here we have security vulnerabilities in an open source UEFI implementation, resulting in a thread about how UEFI is too complex. So your solution... is to use the actual Linux kernel plus some glue code as your boot firmware? I love Linux, but it is neither simple nor free of security vulnerabilities; what exactly does this buy us?

Edit: On reading another reply, I see your argument for having fewer implementations; that's not nothing, but it only really helps if everything is on Linux, which isn't going to help most desktops (and I hope we can agree that NT in firmware is not better) or the rest of the world that isn't on Linux (say, the BSDs).


This petitboot thing is very cool. Thank you for sharing this.


Just don't make something so complicated just to boot an OS. The damn thing is an operating system on its own now.


I used to think like that, but I changed my mind.

I'm not saying UEFI isn't an overcomplicated, overengineered mess, but I worked on embedded systems for a while and I'd say it's no sunshine there either. Booting a modern OS on modern hardware involves a lot of essential complexity you cannot get away from.


The design of UEFI also involves tons of non-essential complexity.


Sure, but it is easy to keep essentially simple things simple; accidental complexity is unforgivable in this case.

Systems that are inherently complex tend to naturally accrue accidental complexity. It's understandably hard to keep these systems lean.

My point is that modern boot systems in general fall into the second category, contrary to other comments that imply the first.


I disagree. UEFI makes my life, as an end user, so much easier. The number of times I've needed to buy a different flash drive because the BIOS of a particular motherboard didn't like the way my existing flash drives smelled has gone down to 0. The thing even supports filesystems _not_ formatted by a tool written for Windows 95 that arranges the FAT partition in _just_ the right way to get it recognised.

Previously, we had motherboards do the same stuff, except they called into ROM on the network card and ran a bunch of non-standard, proprietary code to render the fancy graphics and do online updating. I don't believe for a second that things were easier before UEFI standardised it all; they were just hidden from plain view better.


It helps that unlike with BIOS, the modern tooling doesn't have huge amounts of "we never read the spec but it booted on my machine" that was prevalent even with grub2.

TL;DR a BIOS doesn't have to boot a drive if there's no primary partition marked "active", no matter whether your boot code uses that or not.


It's arguably less complicated than late-stage BIOSes were, and I'm not even counting UEFI Class 1 systems (many systems people thought still had a BIOS were actually UEFI Class 1 under the hood).


It just is that complicated to bring up all the pieces of hardware in a timely manner and with the right configuration. Not to mention if you want to let the user configure things. Just look at, say, DRAM init: https://www.systemverilog.io/design/ddr4-initialization-and-... Of course you're gonna give that hard task to a memory controller, but then you have to initialise that. This goes on for all the subsystems, with complex interactions and features, all of which need handling.

It's not a single microcontroller you're powering on, even if it were, those too require a bunch of initialisation code to function properly.


DRAM init has little to do with UEFI or BIOS, and Intel probably ships x86 components for more exotic uses or even allows custom bootloaders. At least at one time they did. I don't know the current situation, but when I played with all of that, I saw one of their "Memory Reference Code" blobs, in particular the parts in charge of calibrating the memory controller timing, and it was a quite short piece of x86 assembly completely independent from the UEFI architecture insanities. Because it was written in assembly it had an obvious bug, but that's another story.


I agree, but Rust would still have allowed several of these vulnerabilities: the integer underflow would've passed in release mode (though you could argue that anything running this low-level should spare the extra CPU cycles for checked arithmetic), the infinite loops hanging the system, and the predictable TCP sequence numbers.
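To illustrate the release-mode point, a toy sketch (the parser-style function is hypothetical):

    // Debug builds panic on underflow; release builds wrap silently
    // unless you opt into checked arithmetic.
    fn advance(offset: usize, header_len: usize) -> Option<usize> {
        // `offset - header_len` would wrap to a huge value in release mode
        // if header_len > offset; checked_sub turns that into a handled error.
        offset.checked_sub(header_len)
    }

    fn main() {
        assert_eq!(advance(10, 4), Some(6));
        assert_eq!(advance(4, 10), None); // plain `-` would wrap to ~2^64
    }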

I'm not sure about the weak pseudo-RNG, because I would expect an existing crate to get imported for that use case, but the same RNG could also be implemented just as badly in Rust.

As for the buffer overflow vulnerabilities: I completely agree. These are the most dangerous vulnerabilities and I doubt they would've made it past the compiler had they been written in Rust.


Rust won't magically fix every vulnerability and someone would have to pay a team of engineers to rewrite everything.


> someone would have to pay a team of engineers to rewrite everything

A partial effort was already made a while back: https://github.com/tianocore/edk2-staging/tree/edkii-rust

However, this uses uefi-rs, which is incompatible with TianoCore's BSD+Patent licensing, and therefore cannot be used as reference material, as the wiki page states: https://github.com/tianocore/tianocore.github.io/wiki/Tasks-...

More recent efforts have also been mentioned on the mailing list: https://edk2.groups.io/g/devel/search?p=recentpostdate%2Fsti... Rust's standard library also has a basic UEFI implementation now: https://github.com/rust-lang/rust/pull/105861


Some of the challenges in adding Rust to EDKII are described in https://cfp.osfc.io/osfc2020/talk/SLFJTN/. There is some more recent work in this space described in https://microsoft.github.io/mu/WhatAndWhy/rust/, too.



