I had no idea UEFI booted directly into long mode... I feel like a moron who hasn't kept up with the times, but admittedly I'm not super into kernel dev. Can I just ignore segmentation entirely? It has always been my least favorite part of programming on x86
> I had no idea UEFI booted directly into long mode
I was really surprised to learn that it requires PE executables and Windows' custom x64 calling convention. I know it's just a small technical detail for booting ... but it's kind of scary that Microsoft had so much sway in the design of how our future PCs will boot =(
Another small technical detail. Windows has a mechanism to sign executables. The certificate in this mechanism is defined with the following structure [0]:
typedef struct _WIN_CERTIFICATE {
    DWORD dwLength;
    WORD  wRevision;
    WORD  wCertificateType;
    BYTE  bCertificate[ANYSIZE_ARRAY];
} WIN_CERTIFICATE, *PWIN_CERTIFICATE;
UEFI has a shockingly similar structure [1, page 1812]:
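(Sketched from memory of the spec; the field names match Windows' exactly, only the typedefs become UEFI's fixed-width integer types:)

typedef struct _WIN_CERTIFICATE {
    UINT32 dwLength;
    UINT16 wRevision;
    UINT16 wCertificateType;
    //UINT8 bCertificate[ANYSIZE_OF_CERTIFICATE];
} WIN_CERTIFICATE;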
Oh, that is not the really annoying part. By spec, UEFI only requires support for booting from FAT volumes (the particular flavor of FAT it defines), so, surprise, I've yet to see any UEFI BIOS that supports booting from anything but a FAT filesystem.
> PE is a modified version of the Unix COFF (Common Object File Format). PE/COFF is an alternative term in Windows development.
However, there's little information on its license (it says it's a standard currently developed by Microsoft, but that's about it). And COFF is pretty much the same, but older and developed by AT&T[1].
Still, compared to other Microsoft "products of old"[2], there's a surprising amount of documentation and analysis of the PE format.
[2] I don't compare it to the newer open source stuff because it'd be unfair.
[3] https://msdn.microsoft.com/en-us/library/ms809762.aspx -- I was going to link a few other articles (some from the References section of Wikipedia) but most seem to be broken now. Still, there are some interesting papers from digital forensics people that analyze the PE format in depth, mostly for malware scenarios.
It depends. Cheap Intel boards come with a 32-bit UEFI (for whatever reason, it's not like UEFI is paid by the address bit), leading to all kinds of issues.
>(for whatever reason, it's not like UEFI is paid by the address bit)
32-bit Windows 10 is 4 GB smaller than the 64-bit version. This matters on cheap tablets and netbooks that come with <32 GB of eMMC storage. I've heard that 32-bit OEM licenses are also slightly cheaper, but I can't find confirmation of that.
I know we're probably ten years or more out from it (because it'd break way too much software ... I have an application I want to use but can't, due to a 16-bit installer ... in 2016); but I really hope Microsoft eventually puts out a 64-bit-only operating system. It would be lovely to be rid of SysWOW64, Program Files (x86), the alternate shadow registry, ~3 GB of wasted space for all the 32-bit code, etc.
I intentionally leave off the 32-bit compatibility with my FreeBSD systems, and it's lovely. Of course, it's so much easier there with everything being open source.
I feel just the opposite. I have a pile of the most powerful general-purpose commodity hardware ever built, and it's a damning indictment of the industry that it can no longer run software that used to work fine on tiny machines.
I disagree. Things that made sense on a tiny machine, like the original 8086's address wraparound at 1 MB, led to hacks like the A20 gate [1]. Or timing loops that depended on a processor running at 4.77 MHz, requiring things like the turbo button. Compatibility is a balancing game, because you end up keeping bugs around that would otherwise have been fixed in a newer architecture. I'm more of the opinion that we're much better off explicitly emulating those areas, so they run in their own context and it's a lot easier to mitigate damage. While compatibility is a positive thing, improvement is a much more positive thing.
I agree that virtualization is better than support for old instruction sets in new cores, but users should expect that virtualization to be ever-present and reliable rather than mutely accept when tools randomly stop working.
> and it's a damning indictment of the industry that it can no longer run software that used to work fine on tiny machines
On the contrary, I think it's an indictment of the industry that we still put effort in hardware into running ancient software and continue to burden the most widespread architecture in the world with 1970s nonsense when virtualization is the obvious solution.
Plenty of people want to run games for the 6502 architecture on their PCs, yet Intel would be out of their mind if they added a 6502 to the die. Why do we not treat the Intel 8086 the same way?
There's also no PC manufactured in the past two decades that can run it. Real mode has nothing to do with the challenges necessary to emulate that demo.
(There's also the fact that the 8088 MPH demo is not a particularly compelling use case, even if the hardware were present.)
If they wanted to, they should have done it in 10. They can't now, because they promised to support all the devices it could be installed on for their lifetime. Some of those devices are 32-bit.
64-bit Linux kernels can be configured so that they can be booted by a 32-bit UEFI. It's messy, but it works. Even runtime services like UEFI variables work.
I think the idea is that long mode can be fused off. IIRC there's some MS requirement that boxes that have 64-bit CPUs must use 64-bit Windows, so keeping 32-bit firmware and adding a fuse works around it for users who care.
You still need a GDT, but most of its fields are ignored in long mode. There is still limited support in the form of changing the FS/GS segment bases (via MSRs), which is useful for thread-local storage and, together with SWAPGS, for SYSCALL/SYSRET entry code.
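For illustration, a minimal sketch in C of what such a GDT can look like (these are the common flat-model descriptor values; in 64-bit mode the base/limit fields are ignored and only a few attribute bits like L, DPL and P still matter):

#include <stdint.h>

/* Minimal long-mode GDT: base and limit are ignored for code/data
   segments in 64-bit mode, so only the access byte and the L (long
   mode) flag are meaningful. */
static const uint64_t gdt[] = {
    0x0000000000000000ULL, /* mandatory null descriptor            */
    0x00AF9A000000FFFFULL, /* ring-0 code: present, exec/read, L=1 */
    0x00CF92000000FFFFULL, /* ring-0 data: present, read/write     */
};

/* Operand for lgdt: a 16-bit limit plus a 64-bit base, packed. */
struct __attribute__((packed)) gdtr {
    uint16_t limit; /* sizeof(gdt) - 1 */
    uint64_t base;  /* (uint64_t)gdt   */
};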
> Despite my strong attachment to the Rust community and Rust’s perfect suitability for kernels¹, I’ll be writing the kernel in C. Mostly because it is unlikely people inside the university will be familiar with Rust, but also because GNU-EFI is a C library and I cannot be bothered to bind it. I’d surely be writing it in Rust were I more serious about the project.
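For anyone curious, the GNU-EFI side really is small. A minimal sketch of an entry point (build and link flags omitted; the firmware calls it with the Microsoft x64 calling convention mentioned upthread, which is what EFIAPI expands to):

#include <efi.h>
#include <efilib.h>

/* The firmware enters here already in long mode, handing us the image
   handle and the system table. */
EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    InitializeLib(ImageHandle, SystemTable); /* set up gnu-efi's globals  */
    Print(L"Hello from UEFI land\n");        /* console I/O via the table */
    return EFI_SUCCESS;
}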
This is quite an old version of Rust[1]. I tried this a little while ago (but got frustrated trying to link it).
[1] For example:
* `u16` would now be: `const u16`
* `int` is no longer a type.
* `slice::as_ptr` is now a method, so using transmute to extract the pointer from a slice is probably not ideal.
* `core` is a crate that includes things like transmute so you don't need to manually include rust intrinsics etc.
You still can't just ignore BIOS. It might be on death row, but it's still got plenty of life in it. Hell, knowing Intel, x86 BIOS will be supported until x86 itself keels over.
> Bios is wonderful because it's limited; Nobody wants to try shoving more functionality into it. No system management, no drivers, nothing.
Haha oh if only. I've probably worked around more UEFI bugs than any other single individual, and I've still had to deal with more non-UEFI BIOS bugs in my life. BIOS provides no kind of standardised interface to the OS, and so every vendor built their own and every vendor fucked up at some point. UEFI means there's much more shared code than before, and we're definitely benefiting from that.
Not even that! It loads the first 512-byte sector of your boot disk and jumps to it in 16-bit mode. Usually that bit of code loads a few more sectors (using BIOS disk I/O calls), enough to bring in a real bootloader's image (like GRUB), which subsequently understands your filesystem and loads the real kernel.
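For the curious, here's the layout of that sector (the classic MBR; struct and field names here are my own):

#include <stdint.h>

#pragma pack(push, 1)
struct mbr_partition {
    uint8_t  status;        /* 0x80 = bootable */
    uint8_t  chs_first[3];
    uint8_t  type;
    uint8_t  chs_last[3];
    uint32_t lba_first;     /* 32-bit LBAs: the source of the 2 TiB limit */
    uint32_t sector_count;
};

/* The 512-byte sector the BIOS copies to 0000:7C00 and jumps to in
   16-bit real mode, provided the last two bytes are 0x55 0xAA. */
struct mbr {
    uint8_t              bootstrap[446]; /* first-stage loader code */
    struct mbr_partition part[4];        /* the partition table     */
    uint16_t             signature;      /* 0xAA55 (bytes 55 AA)    */
};
#pragma pack(pop)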
It's kind of a hack but I completely agree, much better to leave it in the hands of the systems developer -- firmware that tries to do too much often just gets in the way.
BIOS just runs whatever is in the first 512-byte sector.
Unless it fits in 512 bytes, this is not a "kernel" but just instructions to jump to another address, where there is more small bootstrap code that loads another program: maybe a bootloader that loads another program, maybe a kernel.
Those who multiboot Windows with some other OS that uses disklabels may notice that Windows expects to always be the first OS. Any other bootstrap code put into that first sector will be overwritten by a Windows install.
PXE is good (someone needs to sit down with me and a bottle of rum and work out why PXE behaves differently after a hard reboot in VirtualBox), but I've got to hand it to Apple, Target Disk Mode is nice. They also had some kind of multicast netboot-installer before most OSes.
It may also initialize some of your hardware by calling the firmware supplied with that hardware in ROM (for instance: video cards, network adapters and RAID controllers).
(Off-topic) I was curious to see what other information that site might offer, and was amazed to find a curated list of hundreds of other useful discussions.
Yep, the Yarchive is awesome. Some of the older discussions on computer architecture and programming languages have real historical value and are hard to find anywhere else on the net. Great work Norman did there.
Does anyone know why the references to the original threads (through Google Groups) don't work anymore? Did Google Groups stop carrying the content or can't they search by Message-ID anymore?
Does anyone disagree that UEFI has the capability to be an OS itself?
If it is only used as a "BIOS" then is it unreasonably adding the surface area for bugs and attacks? Is it much larger and more complex than legacy BIOS?
Is this trade-off proportional to the benefits it provides: obviating the need for developers to understand backwards compatibility?
The point of UEFI is not to avoid the need for kernel developers to know how to get into 64-bit mode. If you can't manage to get a computer to do that pretty easily, kernel programming is not for you, and this is mostly limited to the bootloader anyways.
As much as people like to rag on it, UEFI provides a lot of benefits:
- UEFI graphics firmware improves compatibility, and makes things like PCI pass through of GPUs to virtual machines much easier/possible.
- UEFI allows for much faster boot times by cutting out a lot of 16-bit mode/32-bit mode transitions that BIOSes generally used.
- Operating systems no longer need to fight over the master boot record of your drive; with UEFI they can live in relative peace.
- Things that were simply impossible with a lot of BIOS implementations (e.g. booting off >2 TB hard drives) are now possible (see the sketch below).
It's over-engineered, but so is ACPI (IMO to an even greater degree). Does anyone want to return to the old plug-and-play compatibility mess?
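On the >2 TB bullet above: MBR partition entries store 32-bit LBAs and sector counts, which caps out at 2 TiB with 512-byte sectors, while GPT (which UEFI boots natively) uses 64-bit LBAs. A quick sketch of how you'd spot a GPT disk ("disk.img" is a placeholder path):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    uint8_t lba1[512];
    FILE *f = fopen("disk.img", "rb"); /* placeholder image path */
    /* GPT keeps its header at LBA 1 (assuming 512-byte sectors). */
    if (!f || fseek(f, 512, SEEK_SET) != 0 ||
        fread(lba1, 1, sizeof lba1, f) != sizeof lba1) {
        fprintf(stderr, "could not read LBA 1\n");
        return 1;
    }
    fclose(f);
    /* Every GPT header begins with the ASCII signature "EFI PART";
       the 64-bit LBAs that follow are what lift the 2 TiB limit. */
    puts(memcmp(lba1, "EFI PART", 8) == 0 ? "GPT disk" : "no GPT header");
    return 0;
}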
> It's over-engineered, but so is ACPI (IMO to an even greater degree). Does anyone want to return to the old plug-and-play compatibility mess?
In my experience, those who like to rag on UEFI, for whatever problem they deem it to suffer from, are rarely interested in providing other options or solutions to the problems UEFI demonstrably solves.
That's true of any piece of tech. The solution to C++'s problems didn't come from the people who avoided that language entirely, it came from Mozilla, who are up to their eyeballs in C++. People who hate Sexprs still haven't come up with a good replacement.
They've got XML
Many of the people behind that were Lispers. And anyways, I said a GOOD replacement.
But yeah, the solutions to a problem with a system don't come from the people who hate its guts and won't touch it: It comes from people who use it, need its functionality, and need those warts fixed.
Ummm... you know Intel developed EFI in order to kill off BIOS, right? I remember being on some early phone calls about it over 15 years ago before the project was public. At the time, I thought it was a wonky idea, and I stand by that assessment.
In case others find it interesting, I tried to look around for any relationship between LinuxBIOS/SeaBIOS and UEFI, and found some references to TianoCore and EDK/EDK II:
Looks like SeaBIOS is (mostly) happy being a BIOS (with possible plans to be packaged as a "CSM application", which I'm guessing means being loaded from EFI). But, as far as I can tell, for a free/open UEFI toolkit, TianoCore is the project to look at.
I'm a little disappointed that I couldn't easily find a ready-to-run UEFI Space Invaders or Tetris clone... ;)
[ed: eh, I see TFA links to TianoCore's OVMF code for VMs... Oh, well...]
Well, my initial thought was a PCIe card, but it could likely just as well be an SD card or some such if set up right. And yeah, you can get an approximation of it using *nix and clever mounts.
But I was thinking about having the initial OS sit on an actual ROM chip, with a flash chip next to it on the same board/die.
Sadly, UEFI implementations are as buggy as anything else. Option ROMs that never get started? Invalid GPU initialization? I've hit this and more with both Phoenix and AMI. And also security issues thanks to buggy magic Intel blobs.
At least there is a valid, if hard-to-compile-and-deploy, reference implementation.
It is funny that CSM mode is so much more polished and tested than the pure UEFI one. And that so many cards still do not provide compatible firmware.
"Huehuehue" is the Brazillian equivalent of "hahaha".
But since "huehuehue" sounds like a really crazy laughter to English speakers, when English internet culture clashes with Brazillian internet culture, this "huehuehue" really stands out to English speakers and seems hilariously overused.
As a result, English speakers have started jokingly immitating it and it turned into a meme of sorts.