I doubt nVidia will be competing against x86 so much as against x86-64, which has many of the "legacy" features you're no doubt thinking of (e.g. segment registers) removed or sequestered.
64-bit is a big deal on x86 because with it came extra general-purpose registers (the count doubled from 8 to 16) and a cleaner architecture overall. On most other modern machines, 64 bits mostly just means a bigger address space, because they had plenty of registers already.
I remember when 64-bit UNIX was introduced on, IIRC, SGI machines. Nobody made a big deal of it. SGIs were already very impressive, and nobody cared much about the extra bits in the registers.
Back then, RAM was vastly more expensive than it is now, and processors weren't fast enough to chew through gigabytes of data anyway. Nowadays, the bigger address space is a big deal. Some of this is the OS's fault (Windows is the biggest offender, with less than 2GB of user-space addresses available by default).
Even with Eclipse running and all of the memory allocated, less than 4GB is currently being used as program workspace; the rest is going to buffers. It's one of the cases where a PAE-like memory model would suffice.
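For context, a rough sketch of the arithmetic behind a PAE-style setup: PAE widens physical addresses from 32 to 36 bits, so the machine can hold far more RAM (useful for buffers and caches) than any single 32-bit process can map. Python is used here purely as a calculator; the numbers are the standard PAE figures, not measurements from any particular system.

```python
# PAE-style addressing arithmetic (illustrative, not tied to any one OS).
GIB = 2**30

virtual_bits = 32          # each process still sees a 32-bit virtual space
pae_physical_bits = 36     # PAE widens physical addresses to 36 bits

per_process_virtual = 2**virtual_bits   # 4 GiB max per process
total_physical = 2**pae_physical_bits   # 64 GiB of addressable RAM

print(f"per-process virtual space: {per_process_virtual // GIB} GiB")
print(f"PAE physical ceiling:      {total_physical // GIB} GiB")
```

That is, the OS can put 64 GiB of RAM to work as cache even though no single 32-bit program can ever touch more than 4 GiB of it at once.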
I can't remember the last time a single program wanted more than 4GB (actually, I can: it was Firefox, after I left it running over the weekend with a page full of Flash thingies - by Monday it was unresponsive and I had to xkill it). I agree we need 64-bit addresses for servers (and have for quite some time now), but not for desktops, and certainly not for my own computers.
Commercial PC game developers constantly run into the ~1.7GB of address space actually available to user apps. You can forget about memory-mapping any large asset files. Windows can be booted with support for 3GB per process (the same as the default split on most Linux distros), but a boot-time switch is useless for mass-market stuff. Even just running a 64-bit OS gives 32-bit user processes 4GB of address space to play with.
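The splits mentioned above can be summarized numerically. These are the textbook defaults, a sketch rather than a survey of every configuration (and on Windows the 4GB case requires the binary to opt in):

```python
# Nominal user-mode address space for a 32-bit process under common setups.
GIB = 2**30

setups = {
    "Windows default (2 GiB user / 2 GiB kernel)": 2 * GIB,
    "Windows booted with /3GB":                    3 * GIB,
    "typical 32-bit Linux (3 GiB / 1 GiB split)":  3 * GIB,
    # Assumes the binary opts in (LARGEADDRESSAWARE on Windows); the kernel
    # lives above the 32-bit range, so no user-space carve-out is needed.
    "32-bit process on a 64-bit kernel":           4 * GIB,
}

for name, space in setups.items():
    print(f"{name}: {space // GIB} GiB")
```

The ~1.7GB figure in the post is what's left of the nominal 2 GiB after the executable image, DLLs, and heap fragmentation carve up the space, which is why memory-mapping a large asset file is hopeless there.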