I still wonder why AMD didn't opt for a saner encoding for 64-bit mode while still mostly keeping assembly compatibility (32-bit binary code doesn't run in 64-bit mode anyway, except in some edge cases).



The only one of those edge cases I can think of is in an exploit I wrote (http://www.openwall.com/lists/oss-security/2015/08/04/8). This part needs to work whether it is interpreted as 32-bit or 64-bit code:

  1: .byte 0xff, 0xca /* decl %edx */
     jnz 1b
     mov %%ss, %%eax  /* grab SS to display */
  
     /* Did we enter CPL0? */
     mov %%cs, %%dx
     testw $3, %%dx
     jnz 2f

     /* this part knows it's 64-bit */

  2:
     /* this part knows it's 32-bit */
The .byte thing is there because 32-bit x86 allows two encodings for decl %edx, and the one-byte encoding got recycled as a REX prefix in 64-bit mode.
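
To make the trick concrete, here is a minimal sketch (my own illustration, not part of the quoted exploit) of how the same bytes decode in each mode:

  /* 32-bit mode: two valid encodings of the same instruction */
  .byte 0x4a              /* dec %edx, one-byte form */
  .byte 0xff, 0xca        /* dec %edx, two-byte form (FF /1, ModRM 0xca) */

  /* 64-bit mode: 0x4a is no longer an opcode but a REX prefix, so the
     byte sequence 4a ff ca decodes as a single dec %rdx rather than two
     decrements -- hence the explicit two-byte encoding in the exploit. */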


Probably this way they could reuse most of the instruction decoder between 32-bit and 64-bit mode, getting equal performance while saving transistors. I doubt it's related to compiler tooling: instruction encoding is dealt with by the assembler, and assemblers are easy to write. The actual changes a compiler needs when switching to 64-bit are independent of the instruction encoding.
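
As a rough illustration of how much of the encoding carries over between modes (my own sketch, not from this thread): the opcode and ModRM bytes stay the same, and 64-bit operands just add a REX prefix in front:

  89 d8       mov %ebx, %eax    /* identical bytes in 32-bit and 64-bit mode */
  48 89 d8    mov %rbx, %rax    /* 64-bit only: REX.W prefix, rest unchanged */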


I also wondered about that when AMD64 first came out, and my feeling was "because they wanted to make it harder for Intel too". At the time, Intel was likely already developing a 64-bit extension of x86.

On the other hand, I think the 16-to-32-bit extension (with the 386) was done quite well. 16-bit and 32-bit code can coexist, and it's even possible to use the 32-bit registers in 16-bit mode (see the snippet below); not so with AMD64. It's not too difficult to figure out how to add 64-bit support in a less disruptive way, without doing silly things like removing instructions and features that had to be reintroduced later [1][2].

[1] http://www.pagetable.com/?p=25 [2] https://en.wikipedia.org/wiki/X86-64#Differences_between_AMD...
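
A small sketch of the 386 point above (my own example, not from the linked pages): 16-bit code can reach the full 32-bit registers through the 0x66 operand-size override prefix, so old and new code mix freely:

  /* assembled as 16-bit code (.code16), yet it uses %eax */
  66 b8 78 56 34 12    mov $0x12345678, %eax   /* 0x66 switches operand size to 32 bits */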


Perhaps it has something to do with the fact that it was easier to port existing compilers, and perhaps even to design the first microarchitecture to support it and get decent performance without having to wait too long and lose the competitive advantage.

I guess that 32-bit mode could reuse most of the same microarchitecture, which would guarantee that people would still buy the new chips during the transition period.

Itanium was a good example of such a strategy that failed. However, there might have been other reasons as well.


I wouldn't call Itanium 'such a strategy', because it wasn't even similar to x86. A 64-bit mode could have rearranged the instruction encoding while keeping everything from the assembly level up roughly the same. Fewer single-byte opcodes and bigger ranges for prefixes to improve density. Keeping all parts of a register specifier together to improve sanity. Etc.
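
To illustrate the 'register specifiers kept together' point (my own sketch): in AMD64 the top bit of each register number lives in the REX prefix while the low three bits live in ModRM/SIB, so a single register ends up split across bytes:

  4c 89 e5    mov %r12, %rbp
  /* 0x4c = REX.W + REX.R; ModRM 0xe5 = 11 100 101:
     reg field 100 plus REX.R selects r12, rm field 101 selects rbp */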


Possibly to allow parts of the instruction decoder to be shared between modes?



