Absolutely. It's very subtle humor. Students of computer architecture consider x86 to be one of the least elegant architectures around. Its many warts include segment registers (originally a hacky workaround to stretch a 64 KB address space to 1 MB) and an extremely complex instruction encoding that relies on prefix bytes. Many of the legacy issues (such as not having enough registers) have been papered over, leaving traces behind. Many people felt that this complexity would doom the architecture, and that a cleaner, leaner RISC approach would win out.
However, Intel has used its advantage in process technology to throw massive numbers of transistors at the problems caused by all this complexity, and has done well. RISC has thrived in the mobile space, because those extra transistors tend to be power-hungry, but everywhere else x86 is today almost the only game in town.
One reason it's especially funny is that "HLT" is one of those legacy instructions that has pretty much no use in a modern system, yet takes up a whole slot in the byte encoding, while common operations like MOV or ADD often require extra prefix bytes to specify the size of the operands.
Segment registers did not evolve from hacky address-space expansion mechanisms. It may look that way if you consider only Intel's history, but descriptor-style segment registers existed in mainframe architectures before the 8086/88. The 8086 has trivial segment registers (just scaled offsets added to the address), which then morphed into mainframe-like descriptors on its successors (the registers becoming indices into tables of segment descriptors). That could have been the plan all along, though.
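The 8086's "trivial" scheme fits in one line: the segment value is shifted left four bits and added to the 16-bit offset, producing a 20-bit physical address. A minimal sketch of real-mode address formation:

```python
def real_mode_addr(segment, offset):
    """8086 real-mode physical address: segment scaled by 16, plus offset.
    Masked to 20 bits, matching the 8086's 20-line address bus (1 MB)."""
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(real_mode_addr(0xF000, 0xFFF0)))  # 0xffff0 (the 8086 reset vector)
# Many segment:offset pairs alias the same physical byte:
print(real_mode_addr(0x1234, 0x0010) == real_mode_addr(0x1235, 0x0000))  # True
```

The aliasing is why this counts as a scaled offset rather than a descriptor: the register carries no base/limit/permission information, just an address contribution.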
> Students of computer architecture consider x86 to be one of the least elegant architectures around.
I guess it depends when and where one studied.
Having grown with Z80 and x86, it surely looked kind of alright to me.
I only missed the flat addressing from 68000, but given that I only had access to it on Amigas available at some dev meetings, it wasn't something I bothered much with.
Also, I don't remember anyone jumping for joy during the MIPS assignments (using SPIM).
HLT is absolutely used in modern systems; ARM has the analogous WFI and WFE (wait for interrupt / wait for event) instructions.
They're essential for dropping into low-power modes, though admittedly HLT wasn't used for that back in the day.
It does have a slight advantage over ARM and most other RISC architectures in that its instructions are fairly small, meaning you can get quite good decode throughput without going wider. That advantage isn't entirely cancelled out by how badly the opcode space is allocated, since a single instruction can decode into multiple "actual" operations (µops).
I'm still curious as to how Intel thought a mobile x86 chip could ever work.
HLT is still used by kernels when they want to idle the processor until the next interrupt. There are many examples of instructions that see little if any use nowadays, like POPAD and PUSHAD, or the binary-coded decimal instructions.
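For the curious, here's a Python sketch of what the packed-BCD instructions accomplish: after an ordinary binary ADD of two BCD bytes, DAA nudges each nibble back into decimal range. This is a simplified model (the real instruction works off the AF/CF flags left by the preceding ADD), just to show the idea:

```python
def bcd_add(a, b):
    """Add two packed-BCD bytes (two decimal digits each), roughly as
    ADD AL, r8 followed by DAA would. Returns (result_byte, carry_out).
    Simplified sketch: flag behavior is approximated, not cycle-exact."""
    s = a + b
    al = s & 0xFF
    af = ((a & 0x0F) + (b & 0x0F)) > 0x0F  # half-carry out of the low nibble
    cf = s > 0xFF                           # carry out of the byte
    if (al & 0x0F) > 9 or af:               # low digit overflowed decimal range
        al += 0x06
    if al > 0x9F or cf:                     # high digit overflowed decimal range
        al += 0x60
        cf = True
    return al & 0xFF, cf

print(bcd_add(0x19, 0x28))  # (0x47, False): 19 + 28 = 47
print(bcd_add(0x99, 0x01))  # (0x00, True):  99 + 1 = 100, carry out
```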