
The 80486 was only minimally pipelined, and in fact, if you squint, it fits what would later become the standard model: an "expansion" engine at the decode stage emits operations for the later stages (which look more like a RISC pipeline, with separate cache/execute/commit stages). That engine is still microcoded (VLIW may have been getting rolling, but there's no way you can fit a uOp cache in ~1.2M transistors), and it's still limited to multi-cycle execution for all but the simplest instructions.
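
To make the "expansion engine" idea concrete, here's a minimal sketch in C (all names, fields, and register numbers are hypothetical, not any real microarchitecture's encoding) of cracking one CISC-style instruction into RISC-like micro-ops: an ADD with a memory operand becomes a load, an ALU op, and a store, each simple enough for one pass through the backend:

    #include <stdio.h>

    /* Hypothetical micro-op kinds for a RISC-like backend:
     * each does one simple thing per pipeline pass. */
    typedef enum { UOP_LOAD, UOP_ALU_ADD, UOP_STORE } uop_kind;

    typedef struct {
        uop_kind kind;
        int dst;   /* destination register (or scratch temp) */
        int src;   /* source register */
        int addr;  /* memory address, where relevant */
    } uop;

    /* Decode-level "expansion": crack a CISC-style
     * ADD [addr], reg into three single-purpose micro-ops. */
    static int expand_add_mem_reg(int addr, int reg, uop *out) {
        int tmp = 99;                                /* scratch temp register  */
        out[0] = (uop){UOP_LOAD,    tmp, 0,   addr}; /* tmp <- mem[addr]       */
        out[1] = (uop){UOP_ALU_ADD, tmp, reg, 0};    /* tmp <- tmp + reg       */
        out[2] = (uop){UOP_STORE,   0,   tmp, addr}; /* mem[addr] <- tmp       */
        return 3;                                    /* three backend passes   */
    }

    int main(void) {
        uop buf[3];
        int n = expand_add_mem_reg(0x1000, 4, buf);
        const char *names[] = {"load", "add", "store"};
        for (int i = 0; i < n; i++)
            printf("uop %d: %s (dst=r%d src=r%d addr=0x%x)\n",
                   i, names[buf[i].kind], buf[i].dst, buf[i].src, buf[i].addr);
        return 0;
    }

On the 486 this expansion lives in microcode ROM and is re-done every time the instruction executes; a uOp cache would remember the expanded stream and skip the repeated decode work, which is exactly the luxury a ~1.2M-transistor budget can't afford.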

Basically, if you were handed the transistor budget of an 80486 and told to design an ISA, you'd never have picked a microcoded architecture with a pile of addressing modes. People doing RISC at that chip size (the MIPS R4000, say) were beating Intel by 2-3x on routine benchmarks.

Again: it was the transistor budget that informed the choice. Chips were getting bigger, and people had to figure out how to make a million-plus transistors work in tandem. And when chips got MUCH bigger it stopped being a problem, because dynamic ISA conversion becomes the obvious choice when you have 200M transistors to spend.



