What does matter is standardization. Take the boot process, for example: when I have an x86 image of Windows/Linux, I can boot it on any x86 processor. When I have an ARM image, well, then I can boot it on the SoC it was built for, and that's a big maybe, because if the outside peripherals are different (e.g. a different LCD driver) or live on different pins of the SoC, then I am screwed and will have at best a partially working system.
Standardization is something that will carry x86 very far into the future despite its inefficiency on low-power devices.
> When I have an ARM image, well, then I can boot it on the SoC it was built for ... or live on different pins of the SoC, then I am screwed and will have at best a partially working system.
That's not even the half of it either... what firmware does the board run? U-Boot is nice, but sometimes you aren't lucky and you're stuck with something proprietary. Although if you're extremely lucky, you'll have a firmware that supports EFI kernels.
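For the lucky case, here's a rough sketch of what chain-loading an EFI-stub kernel from the U-Boot console can look like. The load addresses come from U-Boot's own environment variables; the partition numbers and file paths are illustrative, and `bootefi` only exists in builds with EFI loader support enabled:

```
=> load mmc 0:1 ${kernel_addr_r} /EFI/BOOT/BOOTAA64.EFI
=> load mmc 0:1 ${fdt_addr_r} /dtbs/my-board.dtb
=> bootefi ${kernel_addr_r} ${fdt_addr_r}
```

At that point the kernel's EFI stub takes over, which is about as close to the "generic image" PC experience as most ARM boards get.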
> When I have an x86 image of Windows/Linux, I can boot it on any x86 processor.
That is largely due to IBM choosing x86, and the PC taking off in a huge way with its de-facto "open" design that ended up being successful and kept backwards compatibility. One could easily imagine a world in which IBM chose ARM (and it was invented earlier) for its first PC, and proprietary x86 SoCs based on Intel's cores are everywhere instead. A world in which CISC is the new fad.
I'm curious what those would be in the modern era?
The "non-PC" x86 chips I can think of:
80186/188 - Sort of predates the IBM hegemony, can be hammered into shape by adding replacements for onboard peripherals
80376 - a failed long-gone experiment
386CX/EX - not entirely sure how incompatible they are
Xeon Phi/Larrabee/Knights Corner designs -- not really SoCs so much as special-purpose accelerators.
They are basically a 486 pipeline with some Pentium instructions.
Unfortunately it seems Intel didn't realise that x86 without the PC legacy is worth little, so their attempts at non-PC x86 have mostly failed. On the other hand, "PC-on-a-chip" SoCs like https://en.wikipedia.org/wiki/Vortex86 have enjoyed more popularity.
This might be true, but in the world we live in, x86 is the open platform and ARM is a mess of incompatibility.
You want to install Ubuntu on your laptop? Download this ISO and you're good.
You want to install LineageOS on your phone? You have to download the exact binary for your phone (which means LineageOS needs to maintain those hundreds of versions) and hope your phone is supported.
The BIOS (or UEFI) is not used by most OSes other than DOS. The compatibility comes from the standard peripherals (DMA, PIT, PIC, FDC, 8042) and PnP interfaces like PCI (as well as standardised interfaces located behind them, e.g. USB OHCI/UHCI/EHCI/XHCI, SATA, BMIDE, LPT, VGA, etc.)
The BIOS (or UEFI) provides a standard interface to load an operating system which can then discover which hardware is installed.
It's true that another factor is the software discoverability of hardware. A lot of stuff on ARM platforms is not discoverable because those platforms are intended to run specific software.
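To make the discoverability contrast concrete: every PCI function exposes the same configuration-space header layout, so an OS can identify any device without board-specific knowledge. A minimal sketch in Python, decoding a fabricated config-space header (the vendor/device IDs here are made up for illustration):

```python
import struct

def parse_pci_header(cfg: bytes) -> dict:
    """Decode the device-independent fields at the start of a PCI
    configuration-space header (little-endian, offsets per the PCI spec)."""
    vendor_id, device_id = struct.unpack_from("<HH", cfg, 0x00)
    prog_if, subclass, class_code = cfg[0x09], cfg[0x0A], cfg[0x0B]
    return {
        "vendor": vendor_id,
        "device": device_id,
        "class": (class_code, subclass, prog_if),
    }

# Fabricated 64-byte header: vendor 0x8086, device 0x1234,
# base class 0x03 (display controller).
cfg = bytearray(64)
struct.pack_into("<HH", cfg, 0x00, 0x8086, 0x1234)
cfg[0x0B] = 0x03  # base class: display controller

info = parse_pci_header(bytes(cfg))
```

This is why a generic x86 kernel can enumerate whatever hardware it lands on, whereas a typical ARM SoC's peripherals sit at fixed addresses that nothing on the bus will announce.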
> When I have x86 image of Windows/Linux, I can boot it on any x86 processor.
Where this gets absurd is modern Debian supports i686 and up. You should be able to get a 27-year-old Pentium Pro to boot the same image as a Raptor Lake CPU.
My memory is that the Pentium Pro had a common memory config of 256MB, way below what modern installers are going to expect. I'm sure you can get Linux to install in 256MB, but I doubt it's going to work with the current RHEL/SUSE/Ubuntu installers.
Not to mention various drivers have gone without maintainers and been pulled from the upstream Linux kernel.
Not to mention dropping IA32 and related PAE support.
With the push for ARM in the datacenter, ACPI adoption is on the upswing. In theory, ACPI could be used on consumer devices as well, there's just little incentive to do that right now.
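For contrast, the non-discoverable status quo on consumer ARM devices looks roughly like this: a hand-written device tree fragment telling the kernel that a peripheral exists at a fixed address. The node name, compatible string, and address below are illustrative, not taken from any real board file:

```
/* Board-specific .dts fragment: nothing on the bus announces this
   controller; the kernel trusts whatever the device tree claims. */
/ {
    ebc: ebc@fe160000 {
        compatible = "vendor,example-ebc";
        reg = <0xfe160000 0x5000>;  /* fixed MMIO address, per board */
        status = "okay";
    };
};
```

With ACPI the firmware ships equivalent tables itself, so one generic OS image can work across boards; with device trees, someone has to write and ship this per board.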
The flip side of this is that you can almost always get exactly the SoC you need with ARM, which makes it great for embedded applications. But yeah, lots of custom board bring-up…
Yeah, but on the other hand these ARM platforms have different capabilities BECAUSE they are not standardized.
All PCs have some kind of fast-updating display output (often HDMI) and a BIOS to emulate a CGA card from 198-whatever on this interface. My PineNote has an e-ink driver connected by something called EBC. Is that compatible? Maybe, if someone writes a BIOS to make it work. And that wouldn't be amiss, actually; even though an e-ink display isn't optimal for software designed for HDMI, it would at least make it possible to get something working quickly.
It also has a touch stylus and a battery - among other peripherals. Can you tell me which BIOS function code gets the X/Y position of a touch stylus? There isn't one - it would be a non-standard extension and we're back to square one. Or should the BIOS implement an on-screen keyboard? Every PC has a keyboard.
That was mostly IBM's doing and goes well beyond the purview of the ISA specifically. x86 doesn't really define that you "must use a BIOS/UEFI/etc." You can't boot, for instance, a standard copy of Windows or Linux on a Sony PS4 (which is not actually a PC compatible, even though it almost seems like one) without ten billion asterisks and hacks.
No you can't, especially on game consoles or embedded deployments, which, although they might have x86 CPUs, have motherboard designs incompatible with standard PC expectations.