
I can think of a few.

Unbundling the engineering model: the current system is highly integrated, so do certain segments of the process better than the incumbents. This would be primarily a business-model innovation, not an engineering one.

New architectures: as Intel has shown lately, the incumbent architecture isn't perfect. This requires new design thinking from the ground up and challenging existing assumptions about von Neumann architectures and instruction sets. RISC-V gives some hope to this idea.

Non-silicon computing technology (photonics, GaN, diamond, quantum, etc.) would require new skills with different materials that the incumbents don't possess. Still years away, though.

Commodity EUV and further process simplification would greatly reduce the barrier to entry, but they require as-yet-uninvented engineering technology and practices.




RISC-V is made to be boring. It does nothing to "challeng[e] existing assumptions about Von Neumann architectures and instruction sets". It is also very unlikely to become the next high-perf ISA, for tons of reasons; it is more suited to embedded stuff.


RISC-V doesn't touch on von Neumann architectures. That could be addressed by tech like memristors, for example, but to put it lightly, they're not ready yet.

But the belief that to be a competitive fab house you have to churn out high-performance chips is in itself one of the existing unquestioned assumptions of chip manufacture, one that may not pan out in the longer term. The idea of the generic CPU may very well become seen as a luxurious, wasteful idea once Moore's law properly runs out of road in a few years. Specialization will breed new ISAs, even boring ones.


> The idea of the generic CPU may very well become seen as a luxurious, wasteful idea once Moore's law properly runs out of road in a few years. Specialization will breed new ISAs, even boring ones.

This is already the case, and I suspect the current general structure will continue mostly unchanged: you will still need your general-purpose, high-perf generic CPU for mostly the same workloads we use them for today (including to run legacy software), and for now there is really only one broad successful approach to designing them (for mass-produced things, at least). Then in embedded chips you can use basically anything, and you also have far less stable ISAs in chips more dedicated to massively parallel compute.

Even with JIT you cannot really multiply basic GP CPU ISAs ad infinitum, because for bulk system code JIT is not that viable (even if it is for big apps). Also, this is basically an attempt to push the stable-interface problem into another layer, but you cannot necessarily remove all the features that made a stable ISA possible, given that tons of them are also needed for performance, and have been for a very long time. So for even just semi-fast general-purpose CPUs, I suspect the race is mostly over (hypothesis: higher-level computer topology unchanged -- if you switch to e.g. chip stacking, things could change more).
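
To make the "stable interface pushed into another layer" point concrete, here is a minimal sketch with a made-up toy bytecode (hypothetical, not any real VM or ISA): programs can target a stable software-level instruction set instead of x86/ARM/RISC-V, but every new host ISA still needs its own interpreter or JIT backend, so the stability problem is relocated rather than removed.

    /* Toy stable "ISA above the ISA" -- hypothetical opcodes for illustration only. */
    #include <stdio.h>

    enum op { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    struct insn { enum op op; int arg; };

    /* Portable programs target the bytecode; this loop (or a JIT emitting
     * native code) is the part that must be re-done for each host ISA. */
    static void run(const struct insn *prog)
    {
        int stack[64];
        int sp = 0;
        for (;;) {
            switch (prog->op) {
            case OP_PUSH:  stack[sp++] = prog->arg;            break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];   break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];   break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]);      break;
            case OP_HALT:  return;
            }
            prog++;
        }
    }

    int main(void)
    {
        /* (2 + 3) * 4, expressed against the stable bytecode, not a native ISA */
        const struct insn prog[] = {
            { OP_PUSH, 2 }, { OP_PUSH, 3 }, { OP_ADD, 0 },
            { OP_PUSH, 4 }, { OP_MUL, 0 }, { OP_PRINT, 0 }, { OP_HALT, 0 },
        };
        run(prog);
        return 0;
    }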

For all the other cases (and you are right, they are also massively important), things will continue to evolve in tons of directions.



