You're probably getting downvoted because you did not follow up with "...because $reasons", so your post comes off as petty Linux hate (is there such a word as "anti-fanboyism"?).
For context, most other posts here contain no technical detail whatsoever either and are nothing more than unsubstantiated opinions (same as mine); people just really dislike that there is someone out there who doesn't think that Linux is phenomenal. In the days of Microsoft dominance, we called that monoculture.
Volumes have been written and videos filmed on all the inadequacies of the GNU/Linux kernel, far more than I could cram into one Hacker News post. I, for example, get a painful reminder of just how unfit the Linux kernel is as firmware every time I turn on my television set, which runs it (ARMv7 Linux, for the curious). After that, I don't want any more. That is not an isolated scenario.
Because people have been doing it since the 90s. Google uses this on their Chromebooks. Some of the people working on Coreboot, Heads, and NERF have been doing it for a long time, and they all seem to agree with each other.
Also, please tell me why the Linux kernel is so bad, but the BSDs are not? They are not that different, and it's hard to argue that they are much safer in terms of bugs (just look at the BSD talk at 34C3).
How long "people have been doing it" has nothing to do with the quality, it's a fallacy. They can't code if they can't even get the basics like polling or startup / shutdown correctly.
While both might be true, it does not change the fact that Linux is not a good choice for firmware, because the product is simply bad. And asking for links (I did originally write "videos", and I meant those exact ones) and then complaining about having to watch through them is exactly what's wrong with our industry. That kind of mentality spills over into code quality, or the lack thereof. It is high time to start owning up to it and change course.
You also mentioned volumes that have been written. And those would be greatly preferable to podcasts or videos, which are some of the worst forms of information sharing: low density, no way to process them at one's own pace, and several other problems.
Unless you want to discuss something visual where showing things on camera helps greatly with delivering the subject matter, you're much better off writing an article or two. But if the video is mostly about looking at people's faces as they talk and flashy intro animations, then you're just wasting people's time and attention. Again, why not just write an article and include photos of people involved?
Have you ever wondered why you're always so angry? I looked at your comment history and your bio... sheesh man, please seek out a mental health professional. It's not normal. I'm gonna take a wild guess that you have had lower back pain for years too. Not trying to be a prick, just a wake-up call.
Had you grown up on something as cool and elegant as a Commodore Amiga or an SGI Origin 3800 running IRIX, and now found yourself stuck on a PC bucket server forced by someone else to run Linux, by know-it-alls who've never known anything else, you'd be pissed off too. People tend to become resentful when they have to regress to something worse and it was not their choice.
So Linux (there is no GNU/Linux kernel, only a GNU/Linux OS) is more popular than you'd like it to be, and is that the problem? I for one am glad GPL-licensed free software is running on as many platforms as possible. In those cases Linux is basically a library they pick for their work. You could well say the same thing about glibc, gcc, apache, nginx, etc. But in the end-user space nobody is forcing you to use Linux, and that is where monoculture would matter, not in the choices regarding firmware with which the end user is never meant to interact.
> So Linux (there is no GNU/Linux kernel, only a GNU/Linux OS) is more popular than you'd like it to be, and is that the problem?
That's a huge problem for me, because I get stuck dealing with problems that were solved in traditional UNIX operating systems anywhere from 20 to 30 years ago. It's extremely depressing to have to regress. If the future is Linux, then I want no part of such a future.
> But in the end-user space nobody is forcing you to use Linux
No? Then why do most companies today force me to work on Linux by insisting on running it? Why am I told in interviews "nah, they don't want to try ZFS or SmartOS... they're Linux people".
The delivery medium is irrelevant. And yes, this particular video is a good source, since the person in the video is an authority on kernel engineering; his teams have managed to deliver a fully functional storage appliance, a volume manager/filesystem from the future, infinitely extensible kernel and userspace debuggers, a dynamic tracing framework, a very high performance operating system which for more than two decades was the textbook on large-scale symmetric multiprocessing, and a large-scale cloud solution which mops the floor with the competition in efficiency and design of use. Oh, and a parallel startup/shutdown mechanism as part of a larger self-healing framework. I have a sneaking suspicion that this person might know what he's talking about, having written and debugged a good portion of that code. I'm not putting Linux in as firmware, because I already have it in the products and infrastructure I use, and not only is it piss-poor, slow, inefficient, and crashing all of the time, but greenhorns who think they know best keep introducing compatibility-breaking changes. Out of the question, no more. And polishing a piss-poor solution is no solution either. Start with a solid foundation, which excludes Linux immediately from consideration.
Oh, the delivery medium is relevant. Not to you, that much is clear. But we've already established your viewpoint is an outlier. The person you're referring to is Bryan Cantrill [1]. For each expert like him, there are also Linux kernel experts, so I am not buying that one [2], sorry. He's been able to rant successfully in text, as you can read on the Wikipedia page. No idea why we'd have to sit through extremely long videos of him. I'm sure a shorter, to-the-point argument can be made by him. So if Cantrill (& friends) could sum it up in a 10-minute read in a 2018 source, that'd be great.
Literally nobody cares that you don't wanna do that with Linux. That's like your problem. The world needs practical solutions; not *NIX wars or zealotry.
There are practical solutions: FreeBSD, OpenBSD, SmartOS. Linux for firmware is not one of those, but if you think it is, good luck with using it for that. Don't bother to let me know how it worked out for you.
On the other hand, a common partition that is shared among all operating systems has too high a chance of getting corrupted or simply blown away by another install.
Just yesterday, a RHEL 7.5 beta install overwrote the UEFI entry for Ubuntu and Ubuntu became unbootable. On another laptop, Fedora blew away my Windows bootloader from the UEFI partition and I have been too lazy to recover it.
Also, because the UEFI partition actually contains things needed for booting these operating systems, it is almost always non-trivial to restore.
Right. I think it exchanged problems we know for problems we don't know. And now we're learning about those problems. Or re-learning.
With respect to NVRAM boot entries, Apple has been doing this for ~30 years, and across all of that hardware (three CPU architectures, at least three firmware implementations, and three or four filesystems) they have a single method of resetting the NVRAM, called "zapping the PRAM": command+option/apple+p+r at the boot chime. And that's because they know these entries can become corrupt for various reasons. And yet in 2018, what single uniform method exists for clearing stale or corrupt NVRAM information on non-Apple hardware?
Apple also long ago inserted a hint for likely boot device suspects into the HFS volume header, as a fallback for NVRAM in case it was incorrect or had been reset with zapping the PRAM.
With respect to the EFI System Partition, Windows and all versions of OS X/macOS do not keep it persistently mounted. That volume is mounted on demand only during updates that require its contents to be updated, i.e. the bootloader and its files, and is then unmounted. I've always thought it incredibly risky, as well as lazy, that all common Linux distros keep the EFI System Partition persistently mounted read-write at /boot/efi.
Somewhat related: when /boot is a separate file system from root, it is also persistently mounted read-write on Linux distros. The functional equivalents on Windows and macOS are not persistently mounted. I think it's a rather sloppy practice, but pretty much no one else seems to think it's a problem. You get a crash, the dirty bits are ignored by the read-only firmware and bootloader, so chances are it still boots, and then the dirty bit and any corruptions due to the crash are fixed by fsck during startup. shrug
You can have multiple EFI system partitions on the same disk. The Windows installer does not like this, but it won't affect the actual boot process after it's installed. The Linux distros I've seen don't mind at all, since /boot/efi is usually mounted by UUID in your fstab.
No, you really can't. There can only be one ESP, and that's in the spec. You can have multiple partitions each assigned as /boot for a different OS, but there can only be one ESP, which is used by the firmware to store its settings, etc.
The EFI spec [0] is officially silent about the presence of multiple EFI system partitions on non-removable hard drives, but explicitly forbids multiple ESPs on removable disks, per §11.2.1.3:
> For removable media devices there must be only one EFI system partition, and that partition must contain an EFI defined directory in the root directory
But in practice, Bad Things (TM) happen if you have more than one ESP on the boot disk.
An ESP isn't just a partition mounted to /boot or one that has boot files or a bootloader; it's (in practice) a FAT32 partition that carries a different partition type identifier, in particular the magic GUID {C12A7328-F81F-11d2-BA4B-00A0C93EC93B}.
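To make the distinction concrete, here's a rough sketch (Python, purely illustrative; find_esps is a made-up helper, and it assumes 512-byte logical sectors and read access to the device or image) of telling a real ESP apart from any other FAT32 partition by its GPT type GUID:

```python
import struct
import uuid

# well-known GPT type GUID for the EFI System Partition
ESP_TYPE_GUID = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")

def find_esps(path, sector_size=512):
    """Yield (entry number, first LBA, last LBA) for every partition whose
    GPT type GUID marks it as an EFI System Partition."""
    with open(path, "rb") as disk:
        disk.seek(1 * sector_size)                   # GPT header lives at LBA 1
        header = disk.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT header found")
        entries_lba, = struct.unpack_from("<Q", header, 72)
        num_entries, = struct.unpack_from("<I", header, 80)
        entry_size, = struct.unpack_from("<I", header, 84)
        disk.seek(entries_lba * sector_size)
        for i in range(num_entries):
            entry = disk.read(entry_size)
            type_guid = uuid.UUID(bytes_le=entry[:16])   # stored mixed-endian
            if type_guid == ESP_TYPE_GUID:
                first, last = struct.unpack_from("<QQ", entry, 32)
                yield i + 1, first, last

# e.g.: for n, first, last in find_esps("/dev/sda"): print(n, first, last)
```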
I'm the author of EasyBCD and numerous other boot utilities, and I've spent a hell of a lot of time (too much time) researching and working around the various UEFI deficiencies in various desktops, laptops, firmware implementations, bootloaders, and operating systems.
(Long story short, the Windows kernel may bork and fail to load if it runs into multiple ESPs. Our bootable boot-repair CDs wouldn't boot into the WinPE kernel because the disk management subsystem would hit an infinite loop in certain cases (not this case). We ended up switching to FreeBSD for our live CDs and writing our own disk repair subroutines to get around cases where even Windows live CDs wouldn't boot, so we could fix our customers' PCs.)
Also, maybe you can answer this: what's the 'correct' way for systems to handle multiple disks with an ESP partition each? In other words, if I'm using software RAID and clone the partition tables, should I assume that the UEFI firmware will default to the first disk's ESP, or is that set when boot configurations are updated?
> if I'm using software RAID and clone the partition tables, should I assume that the UEFI firmware will default to the first disk's ESP
With regard to your question: I guess it depends on what you mean by "software" RAID. Enabling RAID in the "BIOS" (a term that has now come to encompass the firmware configuration utility on UEFI motherboards as well), typically Intel "RST" Rapid Storage RAID, nvraid for nVidia chipsets (less common these days), AMD RAID, etc., is 100% software RAID. It does nothing more than make the BIOS aware of the array's presence (and toggle a flag so that RAID drivers can find it), which keeps it from running into exactly the problem you describe: the cloning is handled at a level above the UEFI bootloader init, which will only see the virtual RAID volumes rather than their independent components.
Now if you put all that aside and are just wondering how it selects in the presence of multiple ESPs on different devices that made it through to the bootloader layer: the UEFI "BIOS" doesn't know (and can't know), since UEFI has done away with the concept of drive order entirely. You can set the order manually in the "boot device order" section (typically on the boot tab in the BIOS config) or choose the one-time boot target via F11 or F12 at the boot-up screen, but otherwise it's just a tossup and the BIOS is free to choose any local device's ESP as "the" ESP. In practice, it'll choose one of the SATA drives (assuming a mix of other interfaces such as NVMe), either the first to respond (not an issue with SSDs, which respond quickly, but magnetic media takes time to spin up if you're still using old HDDs) or the first to be enumerated by the SATA controller (which has its _own_ firmware, its own level of abstraction and logic (and bugs), and whatnot).
BUT in the case of software RAID, that RAID 1 should actually extend to the GPT itself, and both drives will have the same content and mirrored ESPs, so in practice it won't matter which is loaded (so long as your OS doesn't do anything stupid like try to modify the on-disk structure before loading the RAID-aware storage driver. I've seen BIOSes that "helpfully" store a copy of the current firmware to a locally-attached disk with an understandable partition format (so FAT32), which ended up breaking a RAID because the drives were no longer in sync, because the BIOS wasn't aware of the mirror).
> Hard drives may contain multiple partitions as defined in Section 12.2.2 on partition discovery. Any partition on the hard drive may contain a file system that the EFI firmware recognizes. Images that are to be booted must be stored under the EFI subdirectory as defined in Sections 12.2.1 and 12.2.2.
It definitely sounds like UEFI mandates support of it for internal drives (though I'm sure many implementations don't conform).
Empirically, I've had this work with two ESPs (one for Windows 10, installed first, and one for Linux, installed after). I've also done this with two Linux distributions with even less fanfare. I don't know if I just got lucky that my firmware supported it and I didn't have a Windows update that tried to mess with the loader, or not.
I understand the difference between an ext2/3/4 /boot and FAT ESPs, thanks.
> It definitely sounds like UEFI mandates support of it for internal drives
I can see why you would interpret it that way, but the section you quoted doesn't say that.
The spec for the ESP is based on FAT32, which means that in practice all firmwares can read any FAT32 partition. While the ESP must be FAT32 (with the different partition type GUID), any random FAT32 partition is not an ESP.
I really don't mean to overexplain this or belabor the point (forgive me if it comes across that way, but perhaps bear with me, too), but a FAT32 partition may be a boot partition without being the ESP (just as an ext4fs partition might be).
The parts of the spec you quoted are not referring explicitly to an ESP, but just "any recognized partition" which also includes non-ESP FAT32 partitions. In particular,
> Hard drives may contain multiple partitions as defined in Section 12.2.2 on partition discovery.
I just double-checked, and §12.2.2 refers to generic discovery of all partitions, not just the ESP ("This specification requires the firmware to be able to parse the legacy master boot record (MBR) (see Section 5.2.1), GUID Partition Table (GPT) (see Section 5.3.2), and El Torito (see Section 12.2.2.1) logical device volumes.")
> Any partition on the hard drive may contain a file system that the EFI firmware recognizes
Already addressed, but this also includes support for other filesystems, so that, say, a consumer electronics device with a custom filesystem loading its OS from said device is not out of spec.
Thank you, you literally saved my life when I tried to get my Windows Vista and Linux partitions working again. Some Windows Vista update or other corrupted the boot loader, and your software fixed it.
Hey Rick, thanks for taking the time to leave that comment. It means a considerable amount; helping people out is the reason why EasyBCD is free (and with over 50 million downloads to date!)
From my reading of the UEFI spec, much of the language assumes one EFI System Partition per device; for example, see the capsule update language on page 262 of the version 2.4 spec.
> The directory \EFI\UpdateCapsule is checked for capsules only within the EFI system partition on the device specified in the active boot option determined by reference to BootNext variable or BootOrder variable processing.
and later
> The system firmware is not required to check mass storage devices that do not contain boot target that is highest priority for boot nor to check a second EFI system partition not the target of the active boot variable.
There's no accounting for two ESPs on one device, or for how to resolve the ensuing ambiguity: which one is the primary one? Are you certain Windows updates distinguish between two EFI System Partitions, should an update need to update the Windows bootloader or its config?
Actually, it's even worse than that. As the BIOS no longer has a concept of "active device," multiple ESPs on separate devices are also a problem; the user would have to actively pick the one to load at boot time via the non-EFI "boot device menu" at the boot screen.
A big one is GPU passthrough. Before graphics cards supported UEFI, the old VGA BIOS was a nightmare to get working with virtualization. Now, with a tiny bit of configuration, you can get consumer GPUs to work just fine in virtual machines.
Maybe for AMD GPUs, but getting consumer Nvidia GPUs working with VT-d on Nvidia's drivers is a little difficult. GPU passthrough is not something Nvidia wants to support on consumer GPUs.
I've had no problem using consumer Nvidia GPUs with QEMU/KVM for this. All I needed to do was (1) use kvm=off (this does not turn off KVM, it just sort of hides it from the NVidia driver) and (2) on Windows, edit the registry to force enable Message Signaled Interrupts. When I tried it with an AMD card it would periodically crash the host when it was assigned to a Linux guest.
> I can now have a normal partition to put my boot loaders in rather than a hidden chunk at the beginning of the disk.
you can make a /boot with BIOS too. format it with FAT and nowadays all bootloaders can access it. in practice, I'm pretty sure GRUB can boot everything but Windows and Mac anyways, so it doesn't really matter.
> I can easily update, add and remove boot entries from the OS command line.
again, grub handles this. in fact, it handles it much better than some firmwares, which usually have bugs relating to, among other things, random rearrangement of boot order, failure to persist changes, and permanently adding entries for every flash drive you plug in. these have never happened to me with grub, and I do not expect them to. OTOH, I expect to see stupid bugs caused by firmware vendors de facto supporting single-drive Windows only, at least until the end of the PC platform as we know it.
> I can forgo bootloaders entirely and use Linux as UEFI application.
I guess this is nice, but honestly three binaries wasn't significantly worse than two.
> I can use GPT and finally partition as much as I want.
you can use GPT with BIOS too, it's just that MS doesn't feel like testing it so they lock it out for everybody.
/boot is a partition that your boot loader needs to know about ahead of time, and I still have to point my BIOS at a disk that has the correct code embedded at the beginning of the disk, which then gets executed, reads its config, and learns about /boot.
With UEFI, the pre-boot system can automatically detect OSes that are installed on any disks that exist, and can let you choose which ones to run before executing any other code.
This eliminates the issues of e.g. installing Windows (with its boot loader) and then Linux (with its replacement boot loader) overwriting that and then having to add an entry to boot the old OS, and so on. Now all OSes and boot loaders are accessible from the same level and OS installers don't have to worry about wiping out the other ones unless they're deleting partitions.
> again, grub handles this. in fact, it handles it much better than some firmwares
I have had significant issues with Grub when, for example, migrating from one disk to another, having to swap boot drives, change kernel boot parameters, run chroots, and so on, all to make sure that Grub puts the right code at the start of the right disk to point to the right partition ID to read the right config file to load the right kernel.
Nothing is perfect and foolproof, and Grub on UEFI is vastly better than Grub on BIOS/MBR/etc. in my experience.
> With UEFI, the pre-boot system can automatically detect OSes that are installed on any disks that exist, and can let you choose which ones to run before executing any other code.
the boot entries are stored in the firmware, not on the disk. therefore, any new disks can only boot the default binary (\EFI\BOOT\BOOTX64.EFI usually). so, if you regularly get new drives, you need to install a boot manager anyways, which is the same as the BIOS experience in the end, except with more chances for the shitty shitty vendor firmware to fuck something up.
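(aside: you can see for yourself that these entries live in the firmware and not on any disk by reading efivarfs on Linux; a quick sketch, assuming efivarfs is mounted at the usual /sys/firmware/efi/efivars and you have permission to read it, with helpers made up for the example:)

```python
import os
import struct

EFIVARS = "/sys/firmware/efi/efivars"                # usual efivarfs mount point
EFI_GLOBAL = "8be4df61-93ca-11d2-aa0d-00e098032b8c"  # EFI global variable GUID

def read_var(name):
    # efivarfs prepends a 4-byte attributes field to each variable's payload
    with open(os.path.join(EFIVARS, f"{name}-{EFI_GLOBAL}"), "rb") as f:
        return f.read()[4:]

def boot_order():
    payload = read_var("BootOrder")                  # little-endian uint16 IDs
    return struct.unpack(f"<{len(payload) // 2}H", payload)

for num in boot_order():
    entry = read_var(f"Boot{num:04X}")
    # EFI_LOAD_OPTION: 4-byte attributes, 2-byte FilePathListLength,
    # then a NUL-terminated UTF-16LE description
    desc = entry[6:].decode("utf-16-le", errors="ignore").split("\x00", 1)[0]
    print(f"Boot{num:04X}: {desc}")
```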
> This eliminates the issues of e.g. installing Windows (with its boot loader) and then Linux (with its replacement boot loader) overwriting that and then having to add an entry to boot the old OS, and so on. Now all OSes and boot loaders are accessible from the same level and OS installers don't have to worry about wiping out the other ones unless they're deleting partitions.
1. grub 2 auto-detects Windows by default, and other boot loaders required you to manually write entries anyways; if you forget to add the Windows entry while you're in there, that's your own problem (and no big loss anyways).
2. as I said, the firmware boot entry manager is often if not usually total shit. even better, Windows (or probably some poorly-written vendor drivers that assume you only have Windows) has been reported to fiddle with boot entries even after installation! at least in the BIOS system, I set it up and it worked. now I have to reconfigure it every time I boot into Windows?
> I have had significant issues with Grub when, for example, migrating from one disk to another, having to swap boot drives, change kernel boot parameters, run chroots, and so on, all to make sure that Grub puts the right code at the start of the right disk to point to the right partition ID to read the right config file to load the right kernel.
If you configure your /etc/fstab correctly, none of these steps are necessary. literally no configuration is necessary if you just use the "dd" command. if you use "cp", obviously you will need to reinstall the boot loader. this is the only thing that the ESP does better (as long as you remember to copy the ESP contents and not just the main filesystem, but that's not too hard).
Also, GPT partitions have little use when you're using an mdraid/dm-crypt/lvm2 (or ZFS) stack. IIRC Windows has similar features with dynamic disks, so I don't see any reason to use BIOS-era partitioning for anything beyond boot (and MBR is enough for that, even without extended partitions).
GRUB on GPT+BIOS requires the use of a BIOS Boot Partition too (type code ef02 in gdisk... see the full UUID for the full story). So one of the points is moot as well.
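(The "full story" being that the BIOS Boot Partition's type GUID is itself the punchline; a throwaway, purely illustrative Python snippet to see it, assuming the well-known GUID below:)

```python
import uuid

# GPT type GUID for the BIOS Boot Partition (gdisk type code ef02)
bbp = uuid.UUID("21686148-6449-6E6F-744E-656564454649")

# read the GUID back in the mixed-endian byte order it is stored on disk
print(bbp.bytes_le.decode("ascii"))   # -> Hah!IdontNeedEFI
```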
I (see footnote) started off hating UEFI because it changed everything for me and others working with bootloaders, but at this point I'm just ambivalent about it. It trades one set of problems for another. The biggest problem is shoddy implementations and poor development/engineering practices by hardware and software vendors. (I have some devices lying around from R&D that brick themselves if the ESP is erased; you can't access the "BIOS" even if you remove the CMOS battery, can't boot from a removable disk, nada. You need to insert a physical, non-removable disk with an ESP compatible with that PC, or mail it back to the vendor for an RMA!)
That said, your argument really boils down to a straw man. None of your points are really in favor of UEFI and some show a misunderstanding of the previous situation. I really recommend anyone that wants to better understand the (traditional) boot process have a read of this guide we wrote, complete with flowcharts and breakdowns, entitled "Everything you ever wanted to know about how your PC starts up (but were too afraid to ask)": https://neosmart.net/blog/2015/everything-you-ever-wanted-to...
Anyway, to address your remarks:
1) I can now have a normal partition to put my boot loaders in rather than a hidden chunk at the beginning of the disk.
That has always been the case, at least at any point in contemporary PC history. Ever since bootloaders grew past the one-sector limit, the only code at the start of the drive has been a pointer (one that can even be copied and pasted between Windows and Linux, etc., because it serves a universal purpose) that looks for and loads the first active, primary partition on the drive. That is where the actual bootloader code resides.
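A rough illustration of just how little lives in that first sector: a sketch (not production code; find_active_partition is a hypothetical helper) that parses the classic MBR partition table the same way that pointer code does and returns the first active primary partition:

```python
import struct

def find_active_partition(path):
    """Parse the 512-byte MBR: 446 bytes of boot code, four 16-byte partition
    entries, then the 0x55AA signature. Return the first entry whose
    'active' flag (0x80) is set, i.e. the one the pointer code jumps to."""
    with open(path, "rb") as disk:
        mbr = disk.read(512)
    if mbr[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature")
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        boot_flag, ptype = entry[0], entry[4]
        start_lba, num_sectors = struct.unpack_from("<II", entry, 8)
        if boot_flag == 0x80:
            return i + 1, ptype, start_lba, num_sectors
    return None

# e.g. find_active_partition("/dev/sda") might return (1, 0x07, 2048, 204800)
```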
2) I can easily update, add and remove boot entries from the OS command line.
This has nothing to do with UEFI either; it's just a matter of your OS providing an interface for doing so. Microsoft did that back in 2006 for legacy bootloaders with bcdedit (that's where our free EasyBCD tool comes in), and we've developed standalone CLI utilities for doing the same for GRUB under Linux.
3) I can forgo bootloaders entirely and use Linux as UEFI application.
Sure. But that bootloader code has been integrated into the OS and the firmware. Really, it's just reduced the abstraction and increased the blurring between the layers, which makes it hard to swap out components.
4) I can use GPT and finally partition as much as I want.
GPT is independent of UEFI. Many operating systems support the use of GPT partitioning in a BIOS environment (see Windows and FreeBSD; until recently, FreeBSD defaulted to doing so. Not sure if it still does; I'd have to spin up a VM and check against FreeBSD 12-CURRENT). GPT is a partition management scheme; UEFI is a bootloader scheme that requires GPT (interestingly, while it requires GPT to be understood, it does not require GPT to actually be used, and can function with MBR drives, though in practice this does not happen. Mostly.). In addition, the GPT spec is purposely backwards compatible with BIOS: it reserves the first 512 bytes of the disk (where the 446 bytes of MBR bootloader code followed by the 64 bytes of partition descriptors and the 2-byte signature would be located), which can be used to point the traditional "BIOS bootloader" at a GPT partition as a shim.
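And a small companion check for that backwards compatibility, under the same assumptions as the MBR sketch above: a GPT disk keeps a "protective" MBR at LBA 0 with a single type-0xEE entry, while the 446 bytes of boot-code space are still there for a BIOS-style loader to use.

```python
def has_protective_mbr(path):
    """A GPT disk keeps LBA 0 formatted as a legacy MBR whose partition table
    holds a type 0xEE entry spanning the disk, so BIOS-era tools (and boot
    code in the first 446 bytes) still see something they understand."""
    with open(path, "rb") as disk:
        mbr = disk.read(512)
    if mbr[510:512] != b"\x55\xaa":
        return False
    return any(mbr[446 + 16 * i + 4] == 0xEE for i in range(4))

# a BIOS-bootable GPT disk typically reports True here, with the real boot
# code still squeezed into those first 446 bytes
```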
(I'm the developer of EasyBCD [0] and numerous other boot utilities that deal heavily with EFI/GPT dark magic [1])
I learned more from this comment than I did from 6 months of trial and error installation/provisioning/partitioning of various lab computers (macs, hackintoshes, mac/hack + windows, windows, and BSD OSes). Thanks for all your hard work!
You are most welcome. Thank you for taking the time to leave that comment, it actually means a lot. It often seems as if certain content or research that takes months or years is met with nothing more than a 'meh' when published if it isn't applicable to the current fad-of-the-day :)
I can now have a normal partition to put my boot loaders in rather than a hidden chunk at the beginning of the disk.
I can easily update, add and remove boot entries from the OS command line.
I can forgo bootloaders entirely and use Linux as UEFI application.
I can use GPT and finally partition as much as I want.
I might have forgotten something too. :)