On servers, you're already booting a Linux kernel. IMO, it's simpler to boot an older version of Linux that ultimately shares almost all the relevant code with your production version of Linux than to have two entirely separate codebases.
For example, if you fix a bug in an upstream Linux driver needed at boot time, then both your production system and your bootloader will automatically get it (once you rebuild and reflash). With the traditional UEFI setup, you have two separate codebases, each with their own sets of bugs. I don't see how that is simpler.
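To make the mechanism concrete, here's a rough sketch of the hand-off LinuxBoot does: the small Linux kernel in flash finds the production kernel and jumps into it with kexec. Paths and the command line below are made-up placeholders, and this has to run as root from the bootloader kernel's initramfs — it's an illustration of the idea, not a drop-in script.

```shell
#!/bin/sh
# Hypothetical locations of the production kernel and initrd.
KERNEL=/boot/vmlinuz
INITRD=/boot/initrd.img
CMDLINE="root=/dev/sda2 ro"

# Stage the production kernel into memory...
kexec -l "$KERNEL" --initrd="$INITRD" --command-line="$CMDLINE"
# ...then jump straight into it, skipping firmware re-init.
kexec -e
```

The point is that everything before `kexec -e` — storage drivers, filesystem code, network stack if you boot over the network — is the same Linux code your production kernel uses.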
>For example, if you fix a bug in an upstream Linux driver needed at boot time
Or, conversely, if there's a bug in a Linux driver, neither Linux nor the bootloader (which is also Linux) will work.
>With the traditional UEFI setup, you have two separate codebases, each with their own sets of bugs.
Except Linux is a multi-megabyte clusterfuck, while a traditional bootloader is small and easy to understand and debug, and will keep working as long as the hardware it needs to boot stays the same, which it usually does.
This "LinuxBoot" is trying to replace most (or all) of UEFI on systems it can support, not just GRUB. UEFI is also multi-megabyte, and arguably more of a "clusterfuck": motherboard vendors forked Intel's UEFI upstream some years ago and have adapted it sloppily since.