
That and the x86 ISA is utterly disgusting.



I doubt nVidia will be competing against x86 so much as against x86-64, which has many of the "legacy" features you're no doubt thinking of (e.g. segment registers) removed or sequestered.


Nevertheless, the instruction set, and especially the encoding of the newer instructions, is a nightmare.

I wonder what ARM has to say about 64-bit computing, though


> I wonder what ARM has to say about 64-bit computing, though

Bad news on that one, at least for the first generation of nVidia chips based on the Cortex-A15. Segmented address spaces? They're baa-aack!


64-bit is a big deal on x86 because with it came extra general-purpose registers (double the previous eight) and a better architecture overall. On architectures that already have plenty of registers, 64 bits usually only means a bigger address space.
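To make the register point concrete, here's a minimal sketch (assuming gcc on Linux; the file name and function are just illustrative): the same function compiled for i386 has to fetch its arguments from the stack, while the x86-64 SysV ABI delivers the first six integer arguments in registers.

    /* regs.c -- compile twice and compare the generated assembly:
     *   gcc -m32 -O2 -S regs.c -o regs32.s   (i386: arguments read off the stack)
     *   gcc -m64 -O2 -S regs.c -o regs64.s   (x86-64: arguments arrive in rdi, rsi, rdx, rcx, r8, r9)
     */
    long dot3(long a, long b, long c, long x, long y, long z)
    {
        return a * x + b * y + c * z;
    }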

I remember when 64-bit UNIX was introduced on, IIRC, SGI machines. Nobody made a big deal of it. SGIs were already very impressive, and nobody cared that much about the extra bits in the registers.


Back then, RAM was vastly more expensive than it is now, and processor speeds weren't up to chewing through gigabytes of data. Nowadays, the bigger address space is a big deal. Some of this is down to the OS (Windows is the biggest offender, with <2GB of user address space available).


Even with Eclipse running and all of the memory allocated, less than 4 GB is currently being used for program workspace. The rest is being used by buffers. It's one of those cases where a PAE-like memory model would suffice.

I cannot remember the last time a single program wanted more than 4GB (I can, actually: it was Firefox, and I left it running with a page full of Flash thingies over the weekend - by Monday it was unresponsive and I had to xkill it). I can agree we need 64-bit addresses for servers (and have needed them for quite some time now), but not for desktops and certainly not for my own computers.


Commercial PC game developers constantly run into the ~1.7GB address space effectively available to user apps. You can forget about memory-mapping any asset files. Windows can be booted with support for 3GB/process (the same as the default on most Linux distros), but that's useless for mass market stuff. Even just running a 64-bit OS gives 32-bit user processes a 4GB address space to play with.
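A rough POSIX sketch of the problem (the Windows path would go through CreateFileMapping/MapViewOfFile instead; the file name is just illustrative): mapping a big asset file read-only is a one-liner, but in a 32-bit process mmap() itself fails for multi-GB files because there is no contiguous virtual range left, regardless of how much RAM the machine has.

    /* mapasset.c -- try to map one large asset file read-only.
     * Build 32-bit with large-file support to see the failure:
     *   gcc -m32 -D_FILE_OFFSET_BITS=64 mapasset.c -o mapasset
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 1;

        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror("open/fstat"); return 1; }

        /* On a 32-bit build this fails with ENOMEM for multi-GB files:
         * the file is fine, the process just has no address space for it. */
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }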


In what way does that matter? Who writes x86 assembly for these chips?

Compilers produce denser code on CISC (x86) than on RISC (ARM), so x86 has an advantage over ARM here.

http://www.csl.cornell.edu/~vince/papers/iccd09/iccd09_densi...


ARM recommends using Thumb2 for non-legacy software. Thumb2 is denser than x86, so actually ARM is the one with the advantage here (unless you have an existing ARM codebase).
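If you want to sanity-check the density claim yourself, here's a minimal sketch (assuming an arm-linux-gnueabihf GCC cross toolchain and an ARMv7 target; the function is just a stand-in): build the same code once as classic ARM and once as Thumb-2 and compare the section sizes.

    /* density.c -- build twice and compare .text sizes:
     *   arm-linux-gnueabihf-gcc -O2 -marm   -c density.c -o arm.o
     *   arm-linux-gnueabihf-gcc -O2 -mthumb -c density.c -o thumb2.o
     *   arm-linux-gnueabihf-size arm.o thumb2.o
     * On an ARMv7 target -mthumb selects Thumb-2, and thumb2.o's text
     * section typically comes out noticeably smaller than arm.o's.
     */
    int checksum(const unsigned char *buf, int len)
    {
        int sum = 0;
        for (int i = 0; i < len; i++)
            sum = (sum << 1) ^ buf[i];
        return sum;
    }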


Isn't Thumb targeted at memory-constrained systems?


Thumb1 sort of sucked at performance so the only people who used it were the ones who were very memory-constrained. Thumb2 is much better.


I guess there's a ton of silicon on the die just to convert CISC instructions into RISC-like micro-ops. That's about it.


ARM wouldn't be my first choice either.



