LinuxBoot: Linux as Firmware (linuxboot.org)
691 points by doener on Jan 29, 2018 | 178 comments



IMO, the problems that [U]EFI introduces (that far exceed the historical limitations it overcomes) should be self-evident.

IMO, he should not have to argue against having multiple, redundant copies of drivers, shells and utilities, each accessible only in its own "OS" (UEFI, GRUB, OS). It should not be a debate. This is definitely not "defense in depth". IMO, whoever controls the first OS controls the computer, because there is no need for the second and third OS in order to do work (make network connections, move files across the network, etc.).

These "hardware features", whether it's [U]EFI or ME or whatever acronym, are IMO a land grab by hardware vendors over what we know as the "OS". Less computer owner control, more vendor control. The sum effect of all these "features" is that verifying that something is the way the owner wants it, and has not been modified, is far too complex and is ultimately under the control of the vendor, not the computer owner.

I do most work on the commandline in text-mode (no graphics layer) and as such I only need one OS, with some basic utilities. When the news came that new computers would have [U]EFI, I considered whether I should just switch from the OS I am using to [U]EFI. It seemed to have all the utilities I would need to do work, along with the ability to extend with new programs.

I only need one OS to boot to a working environment. I should be able to choose that OS. I hope that Minnich and Hudson and others will consider that the user may want to choose a kernel other than Linux as a source for drivers, e.g., BSD, Plan9, others incl. future OS not yet written, etc., even if today it has inferior driver support compared to Linux, or Intel's UEFI, or whatever.

The speaker seemed a bit perplexed when someone in the audience questioned whether Linux is a "TCB". What is and what is not a "TCB" should be the computer owner's decision, and not anyone else's. If the computer owner wants to cede authority for that decision to a third party, then she can make that choice. But IMO it should be a choice made by the computer owner, and not anyone else.


This is in the hands of the firmware/hardware manufacturers. AFAIK, for instance, the version of Minix that serves as Intel's ME will continue to be operational even if the UEFI is replaced by Linux.

The issue is that 'firmware' actually means many different blobs of software in various ROMs spread throughout the system, from disk controllers to GPUs to memory controllers to NICs and so on.

You will never be able to verify you have full control over your system unless all of this firmware can be reflashed/replaced. LinuxBoot/Coreboot are a step in the right direction, but without major concessions from hardware vendors this is an uphill battle.


That's why it is important to not buy random shiny devices. Open source hardware, like the Teres-1 [0], is what allows us to stay in control. I can't wait for even more open successors based on RISC-V processors.

[0] https://www.olimex.com/Products/DIY-Laptop/


I wonder why they are using the Allwinner A64 if they are concerned with FOSS, though? It's one of the worse serial offenders when it comes to GPL violations[1], and it has a backdoor in its custom kernel that could be used to take over the system[2]. I'm not trying to crap on your efforts; it's just smart to vet any system, even if they claim to be FOSS or open hardware friendly.

[1] http://linux-sunxi.org/GPL_Violations

[2] https://arstechnica.com/information-technology/2016/05/chine...


Active mainlining of Allwinner chips is under way.[0]

I only use devices with kernels from trustable sources, like Debian. The A20-Olinuxino-Micro already works quite well with Debian out of the box. It is even recommended by FreedomBox.[1]

Because mainlining for the A64 isn't in a usable state right now, I'm holding back on buying a Teres-1.

[0] http://linux-sunxi.org/Linux_mainlining_effort

[1] https://wiki.debian.org/FreedomBox/Hardware


I'm sorry, but 99% of the population will prefer a random shiny device that plays their cat video to one that lets them be "in control" but makes them recompile their kernel with the correct wifi driver before it works for them.

With the current ecosystem, it's just much cheaper to mix and match commercial off-the-shelf solutions that are way too powerful for the job than custom building an optimized solution, because it "just works".

Heck, I'll take a Raspberry Pi over an Arduino any day to automate a simple relay, just because with real Go and Linux I can use the same tooling I'm using every day at work.


> I'm sorry, but 99% of the population will prefer a random shiny device

That part gives me the impression of a fatalist stance, and I see that waaay too often. When everybody does x, but doing y gives some profit, people are all interested. But if it's doing y for some other reason, then it's always the "that battle is lost", or "the majority will never change". Argh! It is quite irrelevant if the majority buys random shiny devices or not. What matters only is if there are enough customers of good devices.


> I'm sorry, but 99% of the population will prefer a random shiny device that plays their cat video

I'm sorry, but fuck 99% of the population. They're not the makers or creators, they are already well served by what's out there. I am truly grateful to have open options and other people who realize the power and freedom of truly open hardware. I'll gladly pay the costs to not cede that independence.


> Heck, I'll take a Raspberry Pi over an Arduino any day to automate a simple relay, just because with real Go and Linux I can use the same tooling I'm using every day at work.

I don't think this is a good example as for many people there are far more good reasons to pick up Arduino over RPi in this scenario. With LinuxBoot the question is far more complicated. If there is a vendor using it, and if it's done properly, you won't even notice a difference.


But I don't think the open source hardware is very up to date, or the best choice to use. BTW, I usually run many virtual machines and I need to do some work on a 3D project. I think an Intel CPU is good for hacking on Linux.


> These "hardware features", whether it's [U]EFI or ME or whatever acronym, are IMO a land grab by hardware vendors over what we know as the "OS". Less computer owner control, more vendor control.

What control did you have with an old PC BIOS that you now are missing with UEFI? Things like SMM and Intel ME that keep running after your actual OS has started existed and were pretty much ubiquitous before UEFI became common in consumer hardware. They aren't required to implement UEFI, and UEFI doesn't enable them any more than PC BIOS did.

> I only need one OS to boot to a working environment. I should be able to choose that OS.

You weren't able to choose your PC BIOS any more than you can choose your UEFI implementation now. There are PC BIOS and UEFI motherboards that can accept coreboot or something similar, and in all other cases you're stuck with an opaque vendor blob. At least with UEFI you have an opportunity to write your own software that can be part of the boot process, doing so with a BIOS was usually impossible unless you could install an option ROM.

> I do most work on the commandline in text-mode (no graphics layer) and as such I only need one OS, with some basic utilities. When the news came that new computers would have [U]EFI, I considered whether I should just switch from the OS I am using to [U]EFI. It seemed to have all the utilities I would need to do work, along with the ability to extend with new programs.

That's great for you but a lot of people want to be able to install different OSes on their hardware. Having to duplicate the hardware initialization code for each would be a major pain and would probably result in a lot of hardware only supporting one OS (e.g. many consumer motherboards would probably be Windows only in this situation). I really don't see how having the firmware wedded to a particular OS is going to give people more choice.


> What control did you have with an old PC BIOS that you now are missing with UEFI?

I had a PC with a BIOS once. It didn't seem as slick as the Kickstart I'd used for the previous decade on my Amigas. It was also much less configurable/programmable than the OpenFirmware that came on my subsequent PC. My current machine uses libreboot, which is fine but I much prefer OpenFirmware.

tl;dr "BIOS vs EFI" is a false dichotomy, and "it's better than BIOS" is pretty weak praise for a boot system.


The person I was responding to was claiming UEFI was restricting their choices and had "introduced" problems. I don't really see what OpenFirmware, libreboot, etc. has to do with this. I didn't say UEFI was the best boot system ever envisioned (I definitely think they could have done better) but claiming it's some anti-consumer conspiracy in a way PC BIOS wasn't makes no sense.


For what it's worth, I agree with the parent. IMO Microsoft's strong-arming around "SecureBoot" is enough to distrust everything to do with EFI.


Nice to see someone pushing back against the UEFI hate.

I personally don't have a huge issue with UEFI, for me it has made dual/multi-booting Linux and Windows much much safer as I don't have to worry about the boot manager for one OS trashing the boot manager for the other.


50+ points. Amazing.

Anyone's guess what this means, but at the least I think it shows users [1] have opinions about UEFI. I think it is a good thing if computer owners care about initialization, bootloaders, owner control and freedom of choice. Clearly some do care. Hats off to those folks.

[1] Besides only this one: http://yarchive.net/comp/linux/efi.html


60 and rising. Can we make it to 70?


Apparently there are quite some differences between HN & Reddit. For some reason, here you would get downvoted.


80 and rising. Can we hit 90?


> These "hardware features", whether it's [U]EFI or ME or whatever acronym, are IMO a land grab by hardware vendors over what we know as the "OS". Less computer owner control, more vendor control.

As long as you can still run the OS of your choice at the top of this stack of vendor OSs, what do the vendors gain by this 'land grab'? Particularly the HW vendors. I can imagine Apple and MS are happy to lock things down so only their OSs run easily, but what benefit does any HW vendor get from UEFI? Surely the benefit to Intel from IME is being able to say "you can remotely manage and recover a borked server if you have our IME enabled", but I can't think what else it gives them.

I like the idea of a system running hardware that does the minimum of initialisation before running code of the user's choice but that is more due to my being an 8-bit kid and worries of buggy vendor blobs than assuming the HW vendors are being nefarious.


> What do the vendors gain by this 'land grab'?

One thing is that it could be used as a venue to pursue new recurring revenue streams.

While not exactly the same thing, I've seen some people joke about Intel locking owners out of CPU features behind monthly subscriptions.

This kind of exploitation requires disallowing owners from fully controlling their devices in order to be effective. The more control vendors have, the more elaborate and exploitative these schemes can be, and the less likely owners will be able to do anything about it.


Well said. I remember a cheaper model of Sony laptop on which you could not enable hyperthreading, though it had the same chipset as an expensive model.


> I do most work on the commandline in text-mode (no graphics layer)

I'm pretty sure there is some graphics layer even in your work environment.


VGA textmode. I stopped using X11 many years ago. There are no graphics drivers compiled into the custom kernel that I use (not Linux). The term "no graphics layer" seemed like adequate shorthand.


BTW, on the flip side of the UEFI hate, here are things that I can do with UEFI more easily than with BIOS (or at all).

I can now have a normal partition to put my boot loaders in rather than a hidden chunk at the beginning of the disk.

I can easily update, add and remove boot entries from the OS command line.

I can forgo bootloaders entirely and use Linux as UEFI application.

I can use GPT and finally partition as much as I want.

I might have forgotten something too. :)
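For the boot-entry and EFI-stub points, a sketch using the Linux `efibootmgr` tool (device names, labels and kernel paths here are illustrative; the commands need root and a mounted efivarfs):

```shell
# List current boot entries and the boot order
efibootmgr -v

# Add an entry that boots a kernel directly as a UEFI application
# (requires a kernel built with CONFIG_EFI_STUB on the ESP)
efibootmgr --create --disk /dev/sda --part 1 \
    --label "Linux (EFI stub)" --loader '\vmlinuz.efi' \
    --unicode 'root=/dev/sda2 ro'

# Remove entry Boot0003, then change the boot order
efibootmgr -b 0003 -B
efibootmgr -o 0001,0002
```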


With LinuxBoot you can use any filesystem that Linux supports, not just FAT.

You can update boot entries by editing shell scripts, rather than manipulating opaque NVRAM variables.

You can run Linux applications straight from the ROM if you want to do that.

You can avoid legacy partitions entirely and use LVM for flexible volume management.

And...

You can build it yourself and verify that the reproducible build matches what others have built to ensure that the firmware is clean.

You can have the firmware attest to you via TOTP that it hasn't been changed.

You can have a fully encrypted disk, with secrets sealed in the TPM and only unsealed if the firmware is unmodified.

You can include device drivers for things that UEFI doesn't support.

You can use external hardware tokens like a Yubikey to sign the OS install and have the firmware validate the GPG signature.

Or what ever else you might want to do...
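The "boot entries as shell scripts" point can be as simple as a kexec handoff from inside the LinuxBoot initramfs (a sketch; the device, kernel path and command line are made up):

```shell
#!/bin/sh
# Minimal LinuxBoot-style "bootloader": mount the target root from
# any filesystem the firmware's Linux kernel understands, load the
# target kernel, then jump into it.
mount /dev/sda2 /mnt
kexec -l /mnt/boot/vmlinuz \
      --initrd=/mnt/boot/initrd.img \
      --command-line="root=/dev/sda2 ro"
kexec -e   # replaces the running firmware kernel with the target
```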


All very good and very valid points. Just wouldn’t want to do that with Linux. OpenBSD or FreeBSD, yes; illumos, yes; Linux - out of the question.


You're probably getting downvoted because you did not follow up with "...because $reasons", so your post comes off as petty Linux hate (is there such a word as "anti-fanboyism"?).


Thanks for the clarification.

In the context, most other posts here contain no technical detail whatsoever either and are nothing more than unsubstantiated opinions (same as mine); they just really dislike that there is someone out there who doesn’t think that Linux is phenomenal. In the days of Microsoft dominance, we called that monoculture.

Volumes have been written and videos filmed on all the inadequacies of the GNU/Linux kernel, far more than I could cram into one “Hacker News” post. I for example get a painful reminder of just how unfit the Linux kernel is as firmware every time I turn on my television set which runs it (ARM V7 Linux for the curious). After that, I don’t want any more. That is not an isolated scenario.

Apropos petty, my hate of GNU/Linux is epic.


Can you actually link to these resources?

Because people have been doing it since the 90s. Google uses this on their Chromebooks. Some of the people working on Coreboot, Heads, NERF have been doing it for a long time and they all seem to agree with each other.

Also, please tell me why the Linux kernel is so bad, but the BSDs are not? They are not that different and it's hard to argue that they are much safer in terms of bugs (just look at the BSD talk at 34C3).

You actually clarified nothing in your post.


https://www.youtube.com/watch?v=l6XQUciI-Sc - watch it to the end.

How long "people have been doing it" has nothing to do with the quality; that's a fallacy. They can't code if they can't even get basics like polling or startup/shutdown right.


People ask for sources for your harsh claims and you share multiple-hour podcasts that you "have to watch to the end".

Some people may not be able to code but others are not even able to argue.


While both might be true, it does not change the fact that Linux is not a good choice for firmware because the product is simply bad. And asking for links (I did originally write videos and I meant those exact ones) and then complaining about having to watch through them is exactly what’s wrong with our industry. That kind of mentality spills over into code quality or lack thereof. It is high time to start owning up to it and change course.


You also mentioned volumes that have been written. And those would be greatly preferable to podcast or videos, which are some of the worst forms of information sharing, with low density, inability to process it at one's own pace and several other reasons.

Unless you want to discuss something visual where showing things on camera helps greatly with delivering the subject matter, you're much better off writing an article or two. But if the video is mostly about looking at people's faces as they talk and flashy intro animations, then you're just wasting people's time and attention. Again, why not just write an article and include photos of people involved?


Have you ever wondered why you’re always so angry? I looked at your comment history and your bio... sheesh man, please seek out a mental health professional. It’s not normal. I’m gonna take a wild guess that you have had lower back pain for years too. Not trying to be a prick, just a wake up.


Had you grown up on something so cool and elegant as a Commodore Amiga or an SGI Origin 3800 and IRIX, and now found yourself stuck on a PC bucket server forced by someone else to run Linux, by know-it-alls who've never known anything else, you'd be pissed off too. People tend to become resentful when they have to regress to something worse and it was not their choice.


So Linux (there is no GNU/Linux kernel, but a GNU/Linux OS) is more popular than you'd like it to be, and is that the problem? I for one am glad GPL-licenced free software is running on as many platforms as possible. In those cases Linux is basically a library they pick for their work. You can well say the same thing for glibc, gcc, apache, nginx etc. But in the end-user space nobody is forcing you to use Linux, and that's what monoculture is, not the choices regarding firmware with which the end user is not meant to ever interact.


> So Linux (there is no GNU/Linux kernel, but a GNU/Linux OS) is more popular than how much you'd like it to be, and is that the problem?

That's a huge problem for me, because I get stuck dealing with problems solved in traditional UNIX operating systems anywhere from 30 to 20 years ago. It's extremely depressing to have to regress. If the future is Linux, then I want no part of such future.

> But in the end-user space nobody is forcing you to use Linux

No? Then why do most companies today force me to work on Linux by insisting on running it? Why am I told in interviews "nah, they don't want to try ZFS or SmartOS... they're Linux people".


I seriously would be interested to learn more about that; can you point me to some resources?


Why certainly! This is as good of a place to start as any:

https://www.youtube.com/watch?v=wTVfAMRj-7E


A 4h40m interview on YouTube is considered a source these days. How about written text? Makes it much easier to read and quote.


The delivery medium is irrelevant. And yes, this particular video is a good source, since the person in the video is an authority on kernel engineering; his teams have managed to deliver a fully functional storage appliance, a volume manager/filesystem from the future, infinitely extensible kernel and userspace debuggers, a dynamic tracing framework, a very high performance operating system which for more than two decades was the textbook on large scale symmetric multiprocessing, and a large scale cloud solution which mops the competition in efficiency and design of use. Oh, and a parallel startup/shutdown mechanism as part of a larger self-healing framework. I have a sneaking suspicion that this person might know what he’s talking about after having written and debugged a good portion of that code. I’m not putting in Linux as firmware because I already have it in the products and infrastructure I use and not only is it piss poor slow and inefficient and crashes all of the time, but greenhorns who think they know best keep introducing compatibility-breaking changes. Out of the question, no more. And polishing a piss poor solution is no solution either. Start with a solid foundation, which excludes Linux immediately from the consideration.


Oh, the delivery medium and length are relevant. Not to you, that much is clear. But we've already established your viewpoint is an outlier. The person you're referring to is Bryan Cantrill [1]. For each expert like him, there are also Linux kernel experts, so I am not buying that one [2], sorry. He's been able to rant successfully in text, as you can read on the Wikipedia page. No idea why we'd have to see extremely long videos of him. I'm sure a shorter, to-the-point argument can be made by him. So if Cantrill (& friends) could sum it up in a 10 min read from a 2018 sauce, that'd be great.

[1] https://en.wikipedia.org/wiki/Bryan_Cantrill

[2] https://en.wikipedia.org/wiki/Argument_from_authority


> For each expert like him, there are also Linux kernel experts

That's an unsubstantiated opinion not backed up by any kind of qualitative evidence.


Literally nobody cares that you don't wanna do that with Linux. That's like your problem. The world needs practical solutions; not *NIX wars or zealotry.


There are practical solutions: FreeBSD, OpenBSD, SmartOS. Linux for firmware is not one of those, but if you think it is, good luck with using it for that. Don't bother to let me know how it worked out for you.


On the other hand, a common partition that is shared among all operating systems has too high a chance of getting corrupted or simply blown away by another install.

Just yesterday, a RHEL 7.5 beta install overwrote the UEFI entry of Ubuntu for me and Ubuntu became unbootable. On another laptop, Fedora blew away my Windows bootloader from the UEFI partition and I have been too lazy to recover it.

Also, because the UEFI partition actually contains things needed for booting these operating systems, it is almost always non-trivial to restore.


Right. I think it exchanged problems we knew for problems we didn't. And now we're learning about those problems. Or re-learning.

With respect to NVRAM boot entries, Apple has been doing this for ~30 years and across all of that hardware (three CPU architectures, at least three firmware and three or four filesystems) they have a single method of resetting the NVRAM called "zapping the PRAM" with command+option/apple+p+r at the boot chime. And that's because they know these entries can become corrupt for various reasons. And yet in 2018, what single uniform method exists for clearing stale or corrupt NVRAM information on non-Apple hardware?

Apple also long ago inserted a hint for likely boot device suspects into the HFS volume header, as a fallback for NVRAM in case it was incorrect or had been reset with zapping the PRAM.

With respect to the EFI System partition, Windows and all versions of OS X/macOS do not keep it persistently mounted. That volume is mounted on demand only during updates that require the contents of the volume to be updated, i.e. the bootloader and its files, and is then unmounted. I've always thought it incredibly risky, as well as lazy, that on all common Linux distros the EFI system partition is persistently kept mounted rw at /boot/efi.

Somewhat related: when /boot is a separate file system from root, it is also persistently mounted rw on Linux distros. The functional equivalents on Windows and macOS are not persistently mounted. I think it's a rather sloppy practice, but pretty much no one else seems to think it's a problem. You get a crash, the dirty bits are ignored by read-only firmware and bootloader, so chances are it still boots, and then the dirty bit and any corruptions due to the crash are fixed by fsck during startup. shrug
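For what it's worth, on systemd-based distros you can approximate the Windows/macOS on-demand behaviour with an automount instead of a persistent mount (an illustrative /etc/fstab line; the UUID is a placeholder):

```
# /etc/fstab: mount the ESP only when something touches /boot/efi,
# and unmount it again after two minutes of inactivity
UUID=XXXX-XXXX  /boot/efi  vfat  noauto,x-systemd.automount,x-systemd.idle-timeout=2min,umask=0077  0  2
```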


NVRAM on PC/server main boards is cleared with the RTCRST (Real Time Clock Reset) jumper.

On laptops it's a combination of keys, usually; some older ones would require removing the battery for a specific amount of time.


Great, but what magic key combo, or combos?


With Dell, IIRC, it's turn on all the locks (caps, num, etc.) and press ALT+F; not sure about other laptops.


Depends on your laptop and its manufacturer. Check the manual, or, if using aftermarket firmware, the manual for that.


You can have multiple EFI system partitions on the same disk. The Windows installer does not like this, but it won't affect the actual boot process after it's installed. The Linux distros I've seen don't mind at all, since /boot/efi is usually mounted by UUID in your fstab.


No, you really can't. There can only be one ESP and that's in the spec. You can have multiple partitions each assigned as /boot for a different OS, but there can only be one ESP, which is used by the firmware to store its settings, etc.

The EFI spec [0] is officially silent about the presence of multiple EFI system partitions on non-removable hard drives, but explicitly forbids multiple ESPs on removable disks, per §11.2.1.3:

> For removable media devices there must be only one EFI system partition, and that partition must contain an EFI defined directory in the root directory

But in practice, Bad Things (TM) happen if you have more than one ESP on the boot disk.

An ESP isn't just a partition mounted to /boot or one that has bootfiles or a bootloader, it's (in practice) a FAT32 partition that has a different filesystem ID, in particular, the magic GUID {C12A7328-F81F-11d2-BA4B-00A0C93EC93B}
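The distinction is easy to check, since it's the partition type GUID (not the contents) that marks a partition as the ESP. A small sketch (the `is_esp` helper is made up; on Linux, `lsblk -no PARTTYPE` is one way to obtain the type GUID):

```shell
# The well-known EFI System Partition type GUID, per the UEFI spec
ESP_GUID="c12a7328-f81f-11d2-ba4b-00a0c93ec93b"

# Succeeds if the given partition type GUID marks an ESP
# (comparison is case-insensitive, as GUIDs are)
is_esp() {
    guid=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
    [ "$guid" = "$ESP_GUID" ]
}

# Example usage against a real partition:
#   is_esp "$(lsblk -no PARTTYPE /dev/sda1)" && echo "that's the ESP"
```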

I'm the author of EasyBCD and numerous other boot utilities and spent a hell of a lot of time (too much time) researching and working around the various UEFI deficiencies in various desktops, laptops, firmware implementations, bootloaders, and operating systems.

Here's a Microsoft bulletin on the matter: https://support.microsoft.com/en-us/help/2879602/unable-to-b...

(long story short, the Windows kernel may bork if it runs into multiple ESPs and fail to load. Our bootable boot repair CDs wouldn't boot into the WinPE kernel because the disk management subsystem would hit an infinite loop in certain cases (not this case). We ended up switching to FreeBSD for our live CDs and writing our own disk repair subroutines to get around cases where even Windows live CDs wouldn't boot so we could fix the customer's PCs)

Linux may not hang if there are multiple ESPs (except it sometimes does), but it certainly doesn't support treating multiple partitions as the ESP simultaneously, either: https://superuser.com/questions/688617/how-many-efi-system-p...

[0]: https://www.intel.com/content/dam/doc/product-specification/...


> I'm the author of EasyBCD

Thank you!

Also, maybe you can answer this: what's the 'correct' way for systems to handle multiple disks with an ESP partition each? In other words, if I'm using software RAID and clone the partition tables, should I assume that the UEFI firmware will default to the first disk's ESP, or is that set when boot configurations are updated?


> Thank you!

No problem! Glad you found it useful.

> if I'm using software RAID and clone the partition tables, should I assume that the UEFI firmware will default to the first disk's ESP

With regards to your question: I guess it depends on what you mean by "software" raid. Enabling RAID in the "BIOS" (that term has now come to encompass the firmware configuration utility for UEFI motherboards as well) (typically Intel "RST" rapid storage RAID, nvraid for nVidia chipsets (less common these days), AMD raid, etc.) is 100% software raid, which does nothing more than make the BIOS aware of its presence (and toggle a flag so that RAID drivers can find the array) so that it doesn't run into exactly the problem you describe, i.e. it'll be aware of the cloning at a level higher than the UEFI bootloader init, which will only see the virtual raid volumes rather than their independent components.

Now if you put all that aside and are just wondering how it selects in the presence of multiple ESPs on different devices that made it through to the bootloader layer: the UEFI "BIOS" doesn't know (and can't know) since UEFI has done away with the concept of drive order entirely. You can set the order manually in the "boot device order" section (typically on the boot tab in the BIOS config) or choose the one-time boot target via F11 or F12 at the boot-up screen, but otherwise it's just a tossup and the BIOS is free to choose any local device's ESP as "the" ESP. In practice, it'll choose one of the SATA drives (assuming a mix of other interfaces such as NVMe), either the first to respond (not an issue with SSDs which respond quickly, but magnetic media takes time to spin up if you're still using old HDDs) or the first to be enumerated by the SATA controller (which has its _own_ firmware and its own level of abstraction and logic (and bugs) and whatnot).

BUT in the case of software raid, that RAID 1 should actually extend to the GPT itself and both drives will have the same content and mirrored ESPs, so in practice it won't matter which is loaded (so long as your OS doesn't do anything stupid like try to modify the on-disk structure before loading the RAID-aware storage driver. I've seen BIOSes that "helpfully" store a copy of the current firmware to a locally-attached disk with an understandable partition format (so FAT32) that ended up breaking a RAID because the drives were no longer in sync because the BIOS wasn't aware of the mirror).
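One common way to keep mirrored ESPs in sync on Linux software RAID (a sketch, not something the parent describes: mdadm RAID 1 with v1.0 metadata, which puts the superblock at the end of the partition so the firmware still sees a plain FAT volume; device names are illustrative):

```shell
# Mirror the two ESP partitions. metadata 1.0 keeps the md superblock
# at the end of each partition, so each half still looks like an
# ordinary FAT32 ESP to a firmware that knows nothing about md RAID.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 /dev/sdb1
mkfs.vfat -F 32 /dev/md0
```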


From section §12.2.3.3 of the UEFI 2.0 spec:

> Hard drives may contain multiple partitions as defined in Section 12.2.2 on partition discovery. Any partition on the hard drive may contain a file system that the EFI firmware recognizes. Images that are to be booted must be stored under the EFI subdirectory as defined in Sections 12.2.1 and 12.2.2.

It definitely sounds like UEFI mandates support of it for internal drives (though I'm sure many implementations don't conform).

Empirically I've had this work with two ESPs (one Windows 10, installed first, and one Linux, installed after). I've also done this with two Linux distributions with even less fanfare. I don't know if I just got lucky that my firmware supported it and I didn't have a Windows update that tried to mess with the loader, or not.

I understand the difference between an ext2/3/4 /boot and FAT ESPs, thanks.


> It definitely sounds like UEFI mandates support of it for internal drives

I can see why you would interpret it that way, but the section you quoted doesn't say that.

The spec for the ESP is based off of FAT32, which means that in practice, all firmwares can read any FAT32 partition. While the ESP must be FAT32 (with the different filesystem GUID), any random FAT32 partition is not an ESP.

I really don't mean to overexplain this or belabor the point (forgive me if it comes across that way, but perhaps bear with me, too), but a FAT32 partition may be a boot partition without being the ESP (just as an ext4fs partition might be).

The parts of the spec you quoted are not referring explicitly to an ESP, but just "any recognized partition" which also includes non-ESP FAT32 partitions. In particular,

> Hard drives may contain multiple partitions as defined in Section 12.2.2 on partition discovery.

I just double-checked and §12.2.2 refers to generic discovery of all partitions, not just the ESP ("This specification requires the firmware to be able to parse the legacy master boot record(MBR) (see Section 5.2.1), GUID Partition Table (GPT)(see Section 5.3.2), and El Torito (see Section 12.2.2.1) logical device volumes.")

> Any partition on the hard drive may contain a file system that the EFI firmware recognizes

Already addressed, but it also includes support for other filesystems, so that, say, a consumer electronic device with a custom filesystem that loads its OS from said device is not out of spec.


Thank you, you literally saved my life when I tried to get my Windows Vista and Linux partitions working again. Some Windows Vista update or something corrupted the boot loader or something and your software fixed it.


Hey Rick, thanks for taking the time to leave that comment. It means a considerable amount; helping people out is the reason why EasyBCD is free (and with over 50 million downloads to date!)


From my reading of the UEFI spec, much of the language assumes one EFI system partition per device, for example see the capsule updating language on page 262 of the version 2.4 spec.

The directory \EFI\UpdateCapsule is checked for capsules only within the EFI system partition on the device specified in the active boot option determined by reference to BootNext variable or BootOrder variable processing.

and later

The system firmware is not required to check mass storage devices that do not contain boot target that is highest priority for boot nor to check a second EFI system partition not the target of the active boot variable.

There's no accounting for two ESPs on one device, nor for how to resolve the ensuing ambiguity: which one is the primary one? Are you certain Windows updates distinguish between two EFI system partitions, should an update need to update the Windows bootloader or its config?


Actually, it's even worse than that. As the BIOS no longer has a concept of "active device," multiple ESPs on separate devices are also a problem; the user would have to actively pick the one to load at boot time via the non-EFI "boot device menu" at the boot screen.


I always thought it a design flaw in OSes that always keep EFI partitions mounted (and writeable).

I also read horror stories where someone bricks their pc/laptop by corrupting the UEFI (not the EFI partition, the actual firmware).


Why are you sharing a UEFI partition between your OSes? Pretty sure you don't have to?


A big one is GPU passthrough. Before graphics cards supported UEFI the old VGA BIOS was a nightmare to get working with virtualization. Now with a tiny bit of configuration you can get consumer GPUs to work just fine in virtual machines.


This was mostly intentional, tbh. Both AMD and nVidia GPUs had "officially sanctioned" cards that worked just fine with their legacy VGA ROM.


Maybe for AMD GPUs, but getting consumer Nvidia GPUs working with VT-d on Nvidia's drivers is a little difficult. GPU passthrough is not something Nvidia wants to support on consumer GPUs.

Edit: s/very/a little/


I've had no problem using consumer Nvidia GPUs with QEMU/KVM for this. All I needed to do was (1) use kvm=off (this does not turn off KVM, it just sort of hides it from the NVidia driver) and (2) on Windows, edit the registry to force enable Message Signaled Interrupts. When I tried it with an AMD card it would periodically crash the host when it was assigned to a Linux guest.


It's "very difficult" because the drivers are intentionally nerfed not to load in a virtual environment. Your statement is entirely correct.


One line config change to hide the hypervisor from the VM and it works. (Source: worked for me)
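
For reference, a sketch of the kind of one-line change meant (a QEMU invocation; in libvirt the equivalent is `<kvm><hidden state='on'/></kvm>` in the domain XML; the PCI address and disk image are hypothetical):

```shell
# Hide the KVM CPUID signature from the guest so NVIDIA's driver loads;
# kvm=off does not disable KVM acceleration, it only masks the hypervisor leaf.
# (01:00.0 is a hypothetical GPU address; find yours with lspci.)
qemu-system-x86_64 \
    -enable-kvm \
    -m 8G \
    -cpu host,kvm=off \
    -device vfio-pci,host=01:00.0 \
    disk.img
```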


> I can now have a normal partition to put my boot loaders in rather than a hidden chunk at the beginning of the disk.

you can make a /boot with BIOS too. format it with FAT and nowadays all bootloaders can access it. in practice, I'm pretty sure GRUB can boot everything but Windows and Mac anyways, so it doesn't really matter.

> I can easily update, add and remove boot entries from the OS command line.

again, grub handles this. in fact, it handles it much better than some firmwares, which usually have bugs relating to, among others, random rearrangement of boot order, failure to persist changes, and permanently adding entries for every flash drive you plug in. these have never happened to me with grub, and I do not expect them to. OTOH, I expect to see stupid bugs caused by firmware vendors de facto supporting single-drive Windows only, at least until the end of the PC platform as we know it.

> I can forgo bootloaders entirely and use Linux as UEFI application.

I guess this is nice, but honestly three binaries wasn't significantly worse than two.

> I can use GPT and finally partition as much as I want.

you can use GPT with BIOS too, it's just that MS doesn't feel like testing it so they lock it out for everybody.


/boot is a partition that your boot loader needs to know about ahead of time, and I still have to point my BIOS towards a disk that has the correct code embedded in a section at the beginning of the disk in order to execute it to read its config to know about /boot.

With UEFI, the pre-boot system can automatically detect OSes that are installed on any disks that exist, and can let you choose which ones to run before executing any other code.

This eliminates the issues of e.g. installing Windows (with its boot loader) and then Linux (with its replacement boot loader) overwriting that and then having to add an entry to boot the old OS, and so on. Now all OSes and boot loaders are accessible from the same level and OS installers don't have to worry about wiping out the other ones unless they're deleting partitions.

> again, grub handles this. in fact, it handles it much better than some firmwares

I have had significant issues with Grub when, for example, migrating from one disk to another, having to swap boot drives, change kernel boot parameters, run chroots, and so on, all to make sure that Grub puts the right code at the start of the right disk to point to the right partition ID to read the right config file to load the right kernel.

Nothing is perfect and foolproof, and Grub on UEFI is vastly better than Grub on BIOS/MBR/etc. in my experience.


> With UEFI, the pre-boot system can automatically detect OSes that are installed on any disks that exist, and can let you choose which ones to run before executing any other code.

the boot entries are stored in the firmware, not on the disk. therefore, any new disks can only boot the default binary (\EFI\BOOT\BOOTX64.EFI usually). so, if you regularly get new drives, you need to install a boot manager anyways, which is the same as the BIOS experience in the end, except with more chances for the shitty shitty vendor firmware to fuck something up.

> This eliminates the issues of e.g. installing Windows (with its boot loader) and then Linux (with its replacement boot loader) overwriting that and then having to add an entry to boot the old OS, and so on. Now all OSes and boot loaders are accessible from the same level and OS installers don't have to worry about wiping out the other ones unless they're deleting partitions.

1. grub 2 auto-detects Windows by default, and other boot loaders required you to manually write entries anyways; if you forget to add the Windows entry while you're in there anyways, that's your own problem (and no big loss anyways).

2. as I said, the firmware boot entry manager is often if not usually total shit. even better, Windows (or probably some poorly-written vendor drivers that assume you only have Windows) has been reported to fiddle with boot entries even after installation! at least in the BIOS system, I set it up and it worked. now I have to reconfigure it every time I boot into Windows?

> I have had significant issues with Grub when, for example, migrating from one disk to another, having to swap boot drives, change kernel boot parameters, run chroots, and so on, all to make sure that Grub puts the right code at the start of the right disk to point to the right partition ID to read the right config file to load the right kernel.

If you configure your /etc/fstab correctly, none of these steps are necessary. literally no configuration is necessary if you just use the "dd" command. if you use "cp", obviously you will need to reinstall the boot loader. this is the only thing that the ESP does better (as long as you remember to copy the ESP contents and not just the main filesystem, but that's not too hard).
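
To make the two migration paths concrete, a hedged sketch (device names are hypothetical):

```shell
# Path 1: sector-for-sector copy. Boot code, partition IDs and /etc/fstab
# references all travel with the image, so no reconfiguration is needed.
dd if=/dev/sda of=/dev/sdb bs=4M status=progress

# Path 2: file-level copy. The new disk has no boot code in its first
# sectors, so GRUB must be reinstalled from a chroot into the copied system.
mount /dev/sdb2 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt grub-install /dev/sdb
chroot /mnt update-grub    # or: grub-mkconfig -o /boot/grub/grub.cfg
```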


>I can use GPT and finally partition as much as I want.

The BIOS actually doesn't care what is on the disk except for the 512 bytes containing the MBR, so it is possible to use GPT partitions with it.


Also, GPT partitions have little use when using an mdraid/dm-crypt/lvm2 (or ZFS) stack. IIRC Windows has similar features with dynamic disks, so I don't see any reason to use BIOS partitioning for anything beyond boot (and MBR is enough for that, even without extended partitions).


GRUB on GPT+BIOS requires the use of a BIOS Boot Partition too (type code ef02 in gdisk... see the full UUID for the full story). So one of the points is moot as well.


More specifically, BIOS machines don't know what partitions are (nor partition tables), and it's only the first 446-ish bytes, not 512.


I (see footnote) started off hating the UEFI because it changed everything for me and others working with bootloaders, but at this point I'm just ambivalent about it. It trades one set of problems for another. The biggest problem is with shoddy implementations and poor development/engineering practices by hardware and software vendors (I have some devices lying around from R&D that brick themselves if the ESP is erased; you can't even access the "BIOS" even if you remove the CMOS battery, can't boot from a removable disk, nada. Need to insert a physical, non-removable disk with an ESP compatible with that PC or mail it back in to the vendor for an RMA!)

That said, your argument really boils down to a straw man. None of your points are really in favor of UEFI and some show a misunderstanding of the previous situation. I really recommend anyone that wants to better understand the (traditional) boot process have a read of this guide we wrote, complete with flowcharts and breakdowns, entitled "Everything you ever wanted to know about how your PC starts up (but were too afraid to ask)": https://neosmart.net/blog/2015/everything-you-ever-wanted-to...

Anyway, to address your remarks:

1) I can now have a normal partition to put my boot loaders in rather than a hidden chunk at the beginning of the disk.

That has always been the case, at least any time in contemporary PC history. Ever since bootloaders grew past the 1 sector limit, the only code at the start of the drive has been a pointer (that can even be copied and pasted between Windows and Linux, etc. because it serves a universal purpose) that looks for and loads the first active, primary partition on the drive. That is where the actual bootloader code resides.

2) I can easily update, add and remove boot entries from the OS command line.

This isn't anything to do with UEFI either, it's just a matter of your OS providing an interface for doing so. Microsoft did that back in 2006 for legacy bootloaders with bcdedit (that's where our free EasyBCD tool comes in) and we've developed standalone cli utilities for doing the same for GRUB under Linux.
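
To illustrate with the standard tools (not the utilities mentioned above; the entry label, device, and loader path are hypothetical):

```shell
# Legacy BCD store on Windows (since Vista):
bcdedit /enum                # list boot entries
bcdedit /default {current}   # make the running OS the default

# UEFI NVRAM entries on Linux, with efibootmgr:
efibootmgr -v                # list firmware boot entries and BootOrder
efibootmgr -c -d /dev/sda -p 1 \
    -L "My Linux" -l '\EFI\mylinux\grubx64.efi'
efibootmgr -b 0003 -B        # delete entry Boot0003
```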

3) I can forgo bootloaders entirely and use Linux as UEFI application.

Sure. But that bootloader code has been integrated into the OS and the firmware. Really it's just reduced the abstraction and increased blurring between the layers. Makes it hard to swap out components.

4) I can use GPT and finally partition as much as I want.

GPT is independent of UEFI. Many operating systems support the use of GPT partitioning in a BIOS environment (see Windows and FreeBSD; until recently, FreeBSD defaulted to doing so. Not sure if it still does, would have to spin up a VM and check against FreeBSD 12-CURRENT). GPT is a partition management scheme; UEFI is a bootloader scheme that requires GPT (interestingly, while it requires GPT to be understood, it does not require GPT to necessarily be used, and can actually function with MBR drives, though in practice this does not happen. Mostly.). In addition, the GPT spec is purposely backwards compatible with BIOS by reserving the first 512 bytes of the disk (where the 446 bytes of MBR bootloader code, the 64 bytes of partition descriptors, and the 2-byte boot signature would be located), which can be used to point the traditional "BIOS bootloader" to a GPT partition, serving as a shim.
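
To illustrate that backwards-compatible first sector, here's a hedged sketch (synthetic data, not read from a real disk) of the "protective MBR" a GPT disk carries: 446 bytes of code space, four 16-byte partition entries, and the 0x55AA signature, with a single entry of type 0xEE spanning the disk so legacy tools leave it alone.

```python
def parse_mbr(sector: bytes):
    """Split a 512-byte MBR sector into code area and partition type bytes."""
    assert len(sector) == 512
    code = sector[:446]                               # bootstrap code area
    entries = [sector[446 + 16 * i: 446 + 16 * (i + 1)] for i in range(4)]
    assert sector[510:512] == b"\x55\xaa"             # boot signature
    return code, [e[4] for e in entries]              # byte 4 = partition type

# Build a synthetic protective MBR: empty code, one 0xEE entry, signature.
entry = bytearray(16)
entry[4] = 0xEE                                       # 0xEE = GPT protective
sector = bytes(446) + bytes(entry) + bytes(16) * 3 + b"\x55\xaa"

code, types = parse_mbr(sector)
assert types[0] == 0xEE and types[1:] == [0, 0, 0]
```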

(I'm the developer of EasyBCD [0] and numerous other boot utilities that deal heavily with EFI/GPT dark magic [1])

[0]: https://neosmart.net/EasyBCD/ [1]: https://NeoSmart.net/EasyRE/


I learned more from this comment than I did from 6 months of trial and error installation/provisioning/partitioning of various lab computers (macs, hackintoshes, mac/hack + windows, windows, and BSD OSes). Thanks for all your hard work!


You are most welcome. Thank you for taking the time to leave that comment, it actually means a lot. It often seems as if certain content or research that takes months or years is met with nothing more than a 'meh' when published if it isn't applicable to the current fad-of-the-day :)

If you found that comment insightful, you really should read this: https://neosmart.net/wiki/mbr-boot-process/

Cheers!


There is virtually no information on that page. I don't understand why this is getting voted so highly. I'm interested (this falls right in my area of expertise, see my other comments on this page) and I've heard about this proposal before, but the landing page this story links to is just a placeholder with no useful (technical or marketing) information.


This is probably because people saw the LinuxCon video and are excited that it's out.

At least that's my case :)


Now I got something that I need to checkout to understand :)

Thanks for that "seemingly useless" comment


Thanks, that explains a lot!


Do you have a link to the talk?


The front page of LinuxBoot has links to talks' videos and slides.


I do like the coreboot + lightweight payload (such as u-boot or grub) approach, and simply can't see any benefit in jamming a Linux kernel in; simplicity is itself a value.

Even if I were to find their approach appealing (I don't), Linux's monolithic design with no driver APIs doesn't even seem like a good fit for this; NetBSD's kernel size (and its cleaner design, and the RUMP kernels feature), Minix3 (high assurance, unlike Linux) or even Genode are far better suited.

But, seriously, we need less, simpler, cleaner code, and this "jam Linux into everything" approach seems nothing less than the old "when you have a hammer, everything looks like a nail" problem.


Using Linux is simpler. Grub needs to replicate a ton of stuff that is already in Linux, and its implementations are worse: more insecure and slower.

Once you have a real Linux you can use all the battle tested code. You can easily and in good environment implement all the features you want in terms of boot verification, boot security, authentication, attestation and so on.

Using Grub and pushing ever more features into it is not simpler, and its not more secure.

Using well-tested Linux software that has long supported these features gives you a set of standard open-source implementations for all the features you need.


On servers, you're already booting a Linux kernel. IMO, it's simpler to boot an older version of Linux that ultimately shares almost all the relevant code with your production version of Linux than to have two entirely separate codebases.

For example, if you fix a bug in an upstream Linux driver needed at boot time, then both your production system and your bootloader will automatically get it (once you rebuild and reflash). With the traditional UEFI setup, you have two separate codebases, each with their own sets of bugs. I don't see how that is simpler.


>For example, if you fix a bug in an upstream Linux driver needed at boot time

Or, for example, if there's a bug in a Linux driver, neither linux nor the bootloader (which is also linux) will work.

>With the traditional UEFI setup, you have two separate codebases, each with their own sets of bugs.

Except Linux is a multi-megabyte-clusterfuck, and the bootloader likely is simple and easy to understand/debug, and will keep working as long as the hardware it needs to boot remains the same, which is usually the case.


This "LinuxBoot" is trying to replace most (or all) of UEFI on systems it can support, not just grub. UEFI is also multi-megabyte, and arguably more of a "clusterfuck", forked from intel's UEFI upstream some years ago and sloppily adapted by motherboard vendors.


> NetBSD's kernel size (and its cleaner design,

Don't make me laugh.


Most of us have not read the NetBSD kernel source code (I have not.) I've read a small bit of the Linux source code. Please inform me of why I should laugh at this claim.


You've obviously never read any netbsd code. Fortunately, I have, and I can tell you: Cleaner design.


If we use LinuxBoot to boot Linux (desktop or server), then why don't we boot direct into Linux (desktop or server)?


I think it's because the first Linux image has to be written into the firmware ROM, which makes upgrades difficult because you'd have to flash new firmware on every kernel update.

So it's more practical to have a stripped down Linux kernel that loads a complete kernel from disk.


But if the primary purpose of updating your kernel is security updates (which is the case for some people), shouldn't you be making the effort to do all that flashing? (honest question) I realize the attack surface would be much smaller since you'd be running a lot less services.


Yes, there should be firmware updates for the kernel in flash. (And we haven't really figured out how to handle that yet.)

But such firmware updates likely have different constraints and a different schedule than your kernel-in-production updates. People will likely have much different requirements for the kernel in flash than for the kernel in production. It may be that for your use case and threat model, they'd be the same - but that's up to you to figure out and decide.


The LinuxBoot kernel is designed to check downstream signed kernel images before loading. So it won’t load anything that isn’t signed.

I suggest watching the original Heads talk to understand the context and goal better.


Maybe. That kernel is only running for a short time before it is replaced by the main OS kernel.

In a lot of scenarios it won't connect to the network and therefore will be difficult to exploit. If it does connect to the network (for network boot) then the security profile is different (how do you authenticate the image it downloads) but it probably won't be running many services of its own that could be exploited.

To exploit it locally it would have to run user supplied commands and binaries and hopefully it would be possible to make it difficult to inject these in an adhoc way.

The kernel that your main OS runs does so for a much longer time and with many more processes and services. The security vulnerabilities are exploitable when these different processes run and interact and when these processes are doing things in different security domains.

The scope of what the main OS is doing is much larger and much less pre-determined and therefore easier to exploit.


The LinuxBoot kernel establishes a hardware root of trust with the TPM and measures the ROM before bringing up any IO devices, so when it connects to the network it is able to perform a remote attestation as to its state. This way the hosting provider or customer can decide to not provision a node that has somehow been modified by a prior tenant.

For a network booting scenario the LinuxBoot server can use GPG to validate the signature on the kernel that it receives over the network. Additionally, secrets can be sealed in the TPM and only unlocked if the received kernel matches the expected one (and if the local firmware is unmodified).
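
A rough sketch of what that GPG validation step could look like (the URLs, keyring path, and failure policy here are all assumptions for illustration, not the actual Heads/LinuxBoot code):

```shell
# Fetch the kernel and a detached signature from the provisioning server.
wget https://boot.example.com/vmlinuz https://boot.example.com/vmlinuz.sig

# Verify against a public key baked into the LinuxBoot initramfs;
# refuse to proceed if the signature does not check out.
gpg --no-default-keyring --keyring /etc/keys/boot.gpg \
    --verify vmlinuz.sig vmlinuz || reboot
```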


There's probably an argument to be made for a lean bootloader/kernel that does minimal work - and for a os kernel that's more general. But with network boot (ethernet, WiFi stack, network stack, one or more network transports ((t)ftp, nfs?..) and support for encryption... The line does blur.

Might be worth it to have a minimal Linux "profile" that supports "booting" other Linuxes via kexec.

But then, will you boot nt, freedos, bsd and minix via kexec too?


> There's probably an argument to be made for a lean bootloader/kernel that does minimal work - and for a os kernel that's more general.

We had that in Linux; it was called LILO (Linux loader), consisting of a 512 byte machine language program run from the boot sector which would load a canned sequence of more sectors, a slightly larger program, which would then load a kernel image from a sector map.

Before that, generations of machines going back to the dawn of computing had minimal bootstrap sequences.

But, on the flipside, idea of more capable boot firmware precedes Linux by probably ten years if not more, like on Sun workstations and whatnot.


Booting via kexec is the intended idea here. Theoretically, it should be possible to boot any other OS with kexec. We may have to test that assumption for this project...
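
For the curious, a minimal sketch of the kexec handoff (kernel paths and command line are hypothetical):

```shell
# Stage the target kernel and initramfs from the running (LinuxBoot) kernel...
kexec -l /boot/vmlinuz-4.14 \
      --initrd=/boot/initramfs-4.14.img \
      --command-line="root=/dev/sda2 ro quiet"
# ...then jump into it directly, skipping firmware re-initialization entirely.
kexec -e
```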



It's Linux. Linux all the way down.


Linux is written in C, not Lisp. So no, not really. Unfortunately.


Flexibility. So you can update the OS easily, and have the choice of running Windows or BSD, etc.


wait what? using a whole linux kernel as firmware before booting another OS to improve speed, reliability, security? does this seem strange to anyone else?


Check Ron Minnich's talk explaining the why:

https://schd.ws/hosted_files/osseu17/84/Replace%20UEFI%20wit...

(The video of the presentation is linked in the OP).

There's already two and a half obscure OS's running underneath the OS for booting. So this replaces all that crap with something lean and good.


Firmware with a built in web server is pretty spooky. Why do they have that?


Most UEFI implementations have a network stack these days. It's horrible.


I agree, but it's not exactly like PXE didn't exist before EFI was invented, and AIUI EFI actually decided to reuse PXE instead of NIHing another overengineered non-solution, so honestly that's not even so bad.


Don't forget OpenSSL. Imagine how long SSLv3 or export ciphers would last if this was done in the 1990s


To administer a fleet of computers remotely. HTTP is a widely-implemented and tested protocol that fits the problem domain well.


As opposed to something like SNMP that was designed for remote management.



> So this replaces all that crap with something lean and good.

I think the concern is about "lean". I don't think there's a lot of code yet, but I do see it will use http://u-root.tk/.


LinuxBoot is trying to be agnostic to the initramfs. We don't want to prescribe what tools you use to boot your system.

When there is a build system, you'll be able to choose whatever initramfs you want to implement your boot policies. The two choices we have today are NERF (= LinuxBoot + u-root as initramfs, as you linked) and Heads (LinuxBoot with Heads kernel and runtime, see http://osresearch.net).


> DigitalTermometerSensor (sic)

Heh, that gave me a chuckle

> Userland written in Go (http://u-root.tk)

So much for Go not being a systems programming language.


> So much for Go not being a systems programming language.

There are other examples already.

Fuchsia TCP/IP stack and file system driver management utilities are implemented in Go.

https://groups.google.com/d/msg/golang-dev/2xuYHcP0Fdc/tKb1P...

Android's new GPU debugger is written in Go.

https://github.com/google/gapid

And then there are some bare-metal experiments like G.E.R.T.

https://github.com/ycoroneos/G.E.R.T


I knew about G.E.R.T. but the other two are new to me, very nice!

(I was going for a tongue-in-cheek ribbing of the Go-haters - in retrospect my wording sounds a bit passive-aggressive, probably should have ended with an exclamation mark and a smily or something)


I do share a love-hate relation to Go.

I love that it is based on a mix of Oberon and Limbo, compiled to native code, and an improvement on what a safe C-like language should probably look like; and who knows, maybe someone one day decides to do a "Goberon" OS for their OS PhD research.

Basically what Java and C# 1.0 should have been all along, about 20 years ago.

However I also dislike some of the design decisions regarding features that will never come to Go, regardless of how much we get to discuss them, even with the mirage of Go 2.0.


It's kind of weird; they compile at runtime, as Go binaries are a bit bloated. But userspace is a good fit for Go: GC timing is not an issue, and it has good libraries. Much better choice than C for almost all userspace bringup.


We also have a mode that rewrites all commands' sources given so it can be compiled into one binary. It ahem needs some improvements, though :)


Well, it depends on your definition of "whole Linux". By its nature, it can be stripped down fairly lean, especially if you know precisely which drivers you need and don't need a lot of user-space support libraries, etc.

It is possible for *nix to be very minimal, less so than in the old days, of course. But speaking of the old days, when I earned my bread as a CPU logic designer, I recall one grizzled OS developer's reaction to the looooong errata list for the A0 stepping of some new silicon: "Pffft. I can bring up Unix on a broken washing machine."


Similar philosophy to Petitboot, for which the rationale is given here: https://www.youtube.com/watch?v=CQueOHKO58M

Obviously it's a heavy weight solution, but the idea of having more drivers and better support is compelling— it's a much easier path to exotic boot setups, eg over wifi, off a SAN, whatever.


> Obviously it's a heavy weight solution, but the idea of having more drivers and better support is compelling— it's a much easier path to exotic boot setups, eg over wifi, off a SAN, whatever.

So...this is strictly my own personal experience, but, I've always had better compatibility with my hardware in the BIOS than I have had in any OS. I can't count the number of times my K/V/M has worked in the BIOS, then suddenly stopped working once DOS/Windows/Linux started booting.


UEFI is a heavy-weight solution in comparison as well. You can compile some pretty small Linux kernels.


ehh… Sure, TianoCore extended by AMI/whoever is not that tiny, but it's still not a full multitasking multiuser kernel.

But EFI can be implemented by others. Including U-boot! Which is really lightweight.


why boot linux then boot linux again, rather than just booting linux once?


The way an x86 platform boots to Linux today uses mostly closed-source UEFI DXE drivers, so the current boot flow is really (simplified) like this:

UEFI DXE loads disk/network -> Linux

We're doing Linux (with commonly used open source drivers)-> Linux

It's possible that if your end goal is a Linux system without too many requirements, you could stop at the first Linux system without ever booting the second. However, the BIOS ROM is pretty small, and we're mostly working with about 4-8 MB of free space left, not including the ME region or UEFI PEI.


> you could stop at the first Linux system without ever booting the second

Someone did that! Well, something like it, anyway. I have an Asus EeeBox sitting on my desk, gathering dust, that has something called "SplashTop" - if I hit the right key during the POST screen, it boots into a minimalistic Linux desktop in about 2 seconds.

There is networking and a browser. I haven't used it in ages, but I think the problem was that it could not access the file system, so one could not do very much, and I think settings did not persist across reboots. But the general idea seemed appealing at the time.


Chances are you don't want to re-flash your firmware on your servers just to change what kernel you run in production. The kernel in flash also has space constraints that will make it a pretty small kernel with a lot of features turned off.


In fact, you'd have to reflash to update Firefox! Oh wait, Firefox won't even fit on the ROM chip. No wonder we don't use the ROM chip as our main storage...


You could have a tiny initramfs that mounts the disk you want with your actual init and then switch_root to that. You'd have the kernel from flash, and init from disk.
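
Such a tiny initramfs `/init` could look roughly like this (a busybox-style sketch; device names and paths are assumptions):

```shell
#!/bin/sh
# /init inside the flash-resident initramfs:
# kernel comes from flash, userland comes from disk.
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev

mount -o ro /dev/sda2 /newroot         # the real root filesystem
exec switch_root /newroot /sbin/init   # hand PID 1 over to the on-disk init
```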


Tannenbaum is spinning in his chair.


So you have the choice of running Windows or BSD, etc.


bootstrapping


Correct me if I'm wrong, but you still need a coreboot-compatible motherboard?


Actually, no. We've demonstrated this on boards without coreboot support, such as OCP's Winterfell machine. The reason is that we're inserting the Linux kernel in UEFI's DXE stage, whereas CPU init, such as memory training, is still handled by UEFI's PEI stage. From the perspective of UEFI, the Linux kernel is just another DXE to execute.


Is there a document that does a decent job of explaining how UEFI works?


You could try this: https://github.com/tianocore/tianocore.github.io/wiki/UEFI-E...

I don't know of any easily digestible document though.


Thanks!


No, basically they're taking a vendor-provided UEFI image and stripping out the 90% that they don't want and replacing it with Linux. This is for people who can't afford to port coreboot to their motherboard.


You do not. Take a look at Ron's talk here:

https://schd.ws/hosted_files/osseu17/84/Replace%20UEFI%20wit...


I remember my Commodore 64 days where you turned on the C64 and it booted right into BASIC because it loaded BASIC from a ROM, or it loaded software from a cartridge. The 1541 floppy drive was extra, or the tape or modem etc.

The thing I liked most about the C64 was booting into BASIC as soon as you turned it on.

If Linux becomes the firmware, or is installed as firmware on an EEPROM or whatever, it can boot into a LiveCD distro of Linux from the EEPROM faster than from a hard drive, USB drive, or DVD.

I have to say some computers are getting rid of DVD/CD/BluRay drives and booting from USB drives instead, which is how you download a Linux LiveCD/DVD and test it out; now one could just boot a bare-minimum LiveCD distro from the BIOS, giving you tools to work on hard drives and fix them, etc.

If I ran Commodore, and brought it back from the dead, I'd make Intel/AMD PCs, and I'd also make Commodore Bridge Cards that run a 68K/PPC Amiga on a PCIe/PCI card that can run in the host OS in a window or full screen and can make virtual hard drives and floppies out of files, etc. Maybe a C64/C128/C65 on a card with ports from the Commodore series to hook up real 1541 drives etc. The Commodore Colt PCs would have a VIC-20 series that is low end, and a Mad Max series that is high end and built for video games and SteamOS or running Windows. They would come with Debian Linux but could be reformatted for Windows or SteamOS, and I'd make a deal with Steam to run C64 games in emulators the same way Sega Genesis games are run in emulators.

I'd also try to make ARM-based PCs running Linux for under $100 based loosely on the Raspberry PI but assembled. Contribute to the Raspian or whatever it uses with an emulator app for 8-bit Commodores and they can buy ROMs from a store in the emulators to download and run, after getting permission from the software companies if they still exist to license them and pay 10% or whatever the fee is to the customer.


People have tried to reboot the Commodore IP only to make some Linux version with all the emulators on it running on a PC in a keyboard case that resembles a C64 or Amiga 500. I think yours being a box that has expansion slots would do a lot better. They could not figure out how to monetize the ROMs as TOSEC on Archive.org and other places has virtually every ROM, Tape, and Floppy ever made for the C64.

I like that idea of selling C64 games on Steam the same way they sell Sega Genesis games. Just port VICE to use Steam and verify codes for purchases for different ROMs to load after being paid for.


Is there any chance of a big player like Dell getting behind this project? Will servers ever ship with this firmware?


Horizon Computing will ship OCP nodes running LinuxBoot / NERF in Q2.


I don't understand what's wrong with Grub. UEFI loads Grub which loads your OS. The previous "OS" gets replaced in each iteration (except for the Intel ME stuff .. which is off running somewhere in its own little world).

Grub supports raid, lvm2 and luks. I literally have a single LUKS partition that contains an LVM with a swap and root (/) volume. Yes, that means I have an encrypted /boot. The only thing unencrypted is my EFI system partition.
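For anyone wanting to replicate this, the GRUB side of an encrypted /boot boils down to a couple of settings in /etc/default/grub; a sketch, assuming a dracut-style initramfs (the UUID and volume-group names are placeholders, and the unlock syntax on the kernel command line varies by initramfs):

```shell
# /etc/default/grub fragment -- placeholders, not a drop-in config
GRUB_ENABLE_CRYPTODISK=y   # lets grub-install embed the cryptodisk/luks modules
GRUB_CMDLINE_LINUX="rd.luks.uuid=<luks-uuid> rd.lvm.lv=vg0/root root=/dev/vg0/root"
```

After editing, you'd regenerate the config with grub-mkconfig and rerun grub-install as usual.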


UEFI has more privileges than your OS, just like ME has more privileges than UEFI. How do we know it hasn't messed with the environment in ways we can't see? How do we know our UEFI firmware wasn't compromised? I'd argue it's very hard to tell.


Boot firmware by any other name is still boot firmware, and the same forces that have made EFI what it is will affect any other thing you try to put in its place.


> Typically makes boot 20 times faster

Really? That means my current Fedora boot on a T430 with an SSD of 20s would go down to 1s? Seems unbelievable.


We should clarify that. Our example is an OCP Winterfell node, where boot time went from 8 minutes to 20 seconds.


That's an edge case. Asserting that boot is typically 20 times faster is misleading.


It's pretty typical boot time for most UEFI server platforms with network cards and RAID controllers. The Dell R630 takes about six minutes, the HP DL3x0 is about eight and the Lenovo x3550 takes over ten. Of the ones I've tested only the Intel s2600wf is under two minutes in a stock configuration, but with LinuxBoot it is less than twenty seconds to a Unix shell despite the PEI and DXE serial debugging messages: https://www.youtube.com/watch?v=0HISDFXZvSI


Out of curiosity, what devices do you have that take 8 minutes for the DXE phase to initialize?


If I look at the diagram it looks like Linux replaces the bootloader? Or is that too simple a conclusion?

Wouldn't a minimalistic bootloader loading Linux do about the same (if you implement coreboot)? That leaves Linux to configure everything something like GRUB would usually initialise. I think Linux actually re-initialises most things that are done by the loader, and seeing that the Linux kernel is in the 'memory init' zone, I'd say it's not 'in firmware' and so wouldn't change much?

In my eyes, the BIOS configures some stuff like SMM etc., then hands over to a bootloader (GRUB or so) which further initialises the system, and hands over to the OS (Linux?). So this just seems to skip the bootloader, most of whose initialisation is ignored / redone by Linux anyway?


[flagged]


Creepy? How so?


>The Linux Foundation

I guess that explains why they're trying to jam Linux into the firmware.


I don't. Should I? Why?


It's now hosting this project, and seems to have created this page and reformed the project a bit. And have you seen the foundation's website?

They have a tiered sponsor page to act as a badge of honor for the tech companies with the most money. Their board of directors is made up of 22 of the most powerful tech companies in the world. They "host" an amalgam of different tech products (actually acting, it seems, as the legal entity that controls the projects under their LLC). They sponsor conferences, seemingly just to decide the future of the industry with a bunch of connected higher-ups and project leads. They don't appear to have a lot of transparency in terms of where their money is going, what guides the organization, why it exists, what it is doing, etc.

The most annoying thing about it to me is how a bunch of open source projects are actually just product placement for corporations, with press blurbs regurgitating bullshit business speak about something like logging formats.

It used to be that we used the best tool for the job. Now we use whatever tool has the most buzzwords and corporate sponsorship. This foundation appears to be spookily pushing that agenda, under the badge of Linux, for whatever reason. It's creepy.


You haven't a clue what you're talking about.

The Linux Foundation is one way in which corporations subsidize key open source projects that benefit them all. There's nothing creepy at all about corporate sponsorship of key open source technologies. The Linux Foundation started with Linux and has expanded from there; so what? None of their projects http://www.linuxfoundation.org/projects/ are in any way sinister. You do realise that it is possible to work for a corporation and also be into open source and also not sell your soul to the devil. What hyperbole, give us a break.


Are you aware that the Linux Foundation employs Linus Torvalds? And that the vast majority of Linux (and most other) open source development is done by developers who are paid employees of major corporations like Red Hat, IBM, Oracle, etc.? Some people seem to have the idea that it's developed by hippie volunteers living in a commune, but the fact is, open source development is done by interested parties who are typically employees of corporations that stand to benefit from their work. And that's okay.

As a donor to the Linux Foundation, I don’t find it creepy. Same goes for the Cloud Native Computing Foundation.


They didn't create this page. You can follow the development of the page at https://github.com/linuxboot/linuxboot.org


Best case: the linux foundation is, despite corporate influence, genuinely trying to advance the state of the art for its own sake.

Worst case: this is how open source software development has always been, caveat a few "genesis innovations" several decades ago that kicked off the current business/buzzword-backed hype-fest.

Either way, it's no use complaining.


MS-DOS on BIOS
Linux on MS-DOS (UEFI)
??? on Linux (this)

Great


Wonderful! I detest the UEFI/Secure Boot scheme in use on post-Windows 8 systems.


This doesn't really have anything to do with Windows; recent Linux and FreeBSD releases prefer to use UEFI and Secure Boot if available.

While UEFI is platform/OS-agnostic, it's too early to tell if this LinuxBoot will be as open (even if better designed). Additionally, there are benefits that Secure Boot brings to the table (notably with regard to security and integrity verification) that you should not be so hasty to throw out with the bathwater. If LinuxBoot does not support Secure Boot, I can guarantee you it will be a project dead in the water; no one (in the corporate world) will want to go back to untrusted boot.


The threat model that LinuxBoot is addressing is different from Secure Boot's, and it can use the well-known, normal Linux tools rather than the unconventional UEFI ones.

The Heads runtime can do things like use TPMTOTP to attest to the user that the firmware is unaltered and includes gnupg to verify the kernel and root filesystem signatures with the user's own key.

For cloud systems a LinuxBoot runtime can use the TPM or other trusted hardware to remotely attest to the client's own provisioning server that the configuration is unchanged, and since it is reproducibly built, that the firmware is what the user expects. This is significantly more trustworthy than the binary blobs and non-reproducible UEFI firmware on most servers.
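To make the attestation idea concrete, the underlying mechanism is a hash chain: each boot component is measured into a register before it runs, so the final value summarizes everything that executed, and a remote verifier can compare it against the expected value for a reproducible build. A toy sketch (not real TPM code: real TPMs chain binary digests into fixed PCR banks in hardware, whereas this chains hex strings with sha256sum; the filenames are stand-ins):

```shell
# PCR-style extend: pcr_new = H(pcr_old || H(component))
pcr=$(printf '%064d' 0)                # register starts at all zeros
for component in firmware.bin kernel.img; do
  printf 'contents of %s' "$component" > "$component"   # stand-in blobs
  h=$(sha256sum "$component" | cut -d' ' -f1)           # measure the component
  pcr=$(printf '%s%s' "$pcr" "$h" | sha256sum | cut -d' ' -f1)  # extend
done
echo "final PCR-like value: $pcr"
```

Because the chain is order-dependent and one-way, tampering with any measured component (or reordering the measurements) changes the final value, which is what lets a provisioning server detect unexpected firmware.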


No, and I’m sorry that it came to this. The article presumes that the quality of the Linux kernel is indisputable, and it simply isn’t.


Now systemd will be in the boot loader, too, so finally we'll be able to have perfect power management in Linux, once it controls everything first-hand :-)


You can put whatever in the initramfs, but since Linux just boots as a UEFI executable with CONFIG_EFI_STUB=y, it would be somewhat silly to start up a full userland via systemd in the bootloader.

systemd-boot is a thing though, which is a pretty small UEFI application that reads a Freedesktop Boot Loader Specification config [1] from an EFI partition and tells UEFI to boot a kernel directly. Definitely no Linux or systemd-init involved though.

[1]: https://www.freedesktop.org/wiki/Specifications/BootLoaderSp...
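For anyone who hasn't seen one, a Boot Loader Specification entry is just a short plain-text file on the EFI partition, e.g. /boot/loader/entries/linux.conf (the title, paths, and UUID here are illustrative, not a real machine's config):

```
title   Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=UUID=<root-uuid> rw
```

systemd-boot enumerates these files to build its menu and then asks the firmware to load the named kernel directly.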


(formerly known as gummiboot)


If you want to use systemd with LinuxBoot, go for it, but LinuxBoot as it is doesn't prescribe an initramfs or a runtime. You can use whatever you want. :)


I was kidding, it was just "systemd overlords"-irony :-)


I know, just making sure for other readers :) We're trying not to make the wrong impression. Besides - the goal of the project is to enable you to have more control over your firmware - if you want systemd in it, we might advise against it, but it's all up to you.



