Revisiting the Intel 432 (2008) (dtrace.org)
31 points by mpweiher on Jan 12, 2019 | 15 comments



Their next design and failure was the i960 in the BiiN project (below). Although BiiN failed, the i960 looks like a big improvement on the i432, balancing aspects of RISC simplicity, object architecture, and reliability mechanisms. It looks like a nice processor; too bad it failed in the market. It did have some interesting use cases, including the F-22 fighter.

https://en.wikipedia.org/wiki/BiiN

https://en.wikipedia.org/wiki/Intel_i960

http://bitsavers.org/pdf/biin/BiiN_CPU_Architecture_Referenc...


Very interesting references; that was the first I had heard of the BiiN project.

> ...failure was i960 in BiiN project...failed in the market.

I was confused by that for a bit because my vague memory was that the i960 was a roaring success. The Wikipedia page you cite seems to agree:

> It became a best-selling CPU

Did you mean the i960MX that was used in BiiN?

> The i960MC included all of the features of the original BiiN system; but these were simply not mentioned in the specifications, leading some[who?] to wonder why the i960MC was so large and had so many pins labeled "no connect"


Yeah, the i960MX that had all the good RAS and security features. That would be useful today. The good news is we have stuff like CHERI and CoreGuard that someone might turn into systems we can actually buy for security-focused applications.


It would be nice if the RISC-V community adopted the lessons from CHERI while the install base is small enough to allow substantial changes to the architecture standard.


So Intel did not learn from its history and built the Itanium.

“....This model had many ramifications for the hardware, but it also introduced a dangerous dependency on software: the hardware was implicitly dependent on system software (namely, the compiler)...”


I think most people take the wrong lesson from the 432 and IA64. The problem wasn't that the hardware was dependent on software; it's that the chips exposed too much of their internals to would-be adopters. GPUs are also extremely dependent on software/compilers, but few people have to care about that because Intel/AMD/Nvidia don't expose the internal ISA of any of their chips.


That's not exactly true. The main advantage GPUs have is that they are not general-purpose; they are accelerators. Nvidia made GPGPU successful by saying "throw away your C/C++ code and use this entirely new language (more appropriate for the hardware) to program this thing", and backing that up with impressive performance improvements. Intel failed with Xeon Phi because, essentially, they promised you wouldn't have to rewrite your code to get the performance improvements, but the improvements were anemic if you didn't rewrite your code.
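
(A minimal sketch of that "rewrite it in the new language" point, assuming a CUDA-capable GPU and the CUDA runtime; the kernel and sizes are illustrative, not from the thread. The serial C loop has to be re-expressed as a kernel before the GPU can do anything with it.)

    // saxpy.cu -- compile with: nvcc saxpy.cu -o saxpy
    #include <cstdio>
    #include <cuda_runtime.h>

    // Plain C/C++ version: one thread walks the whole array.
    void saxpy_cpu(int n, float a, const float *x, float *y) {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // CUDA version: the loop disappears; each GPU thread handles one element.
    __global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        // Unified memory keeps the sketch short; real code often manages
        // host/device copies explicitly.
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The Xeon Phi pitch, by contrast, was that you could keep the plain loop and let the compiler do the rest, which is exactly the kind of software dependency the thread is complaining about.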


CUDA is now 11 years old, older than x86 was when the i860 came out. Internally, today's GPUs are radically different, but they run the same software.


Remember the other Intel ISA that could be described by this slightly elided text from TFA? "This model had many ramifications for the hardware, but it also introduced a dangerous dependency on software: the hardware was implicitly dependent on system software (namely, the compiler) for efficient management[...]. As it happened, the needed compiler work was not done, and the Ada compiler as delivered was pessimal[...]. [T]his software failing was the greatest single inhibitor to performance"


Sounds a lot like Itanium and the original Xeon Phi architectures (Knights Corner and Knights Ferry). They were both very dependent on the compilers being smart enough to keep everything occupied constantly.


In the world they were aiming at, computers are for crunching numbers.


Maybe one lesson is just that hardware companies often radically underestimate how much effort needs to be put into the complementary software.

Compare the start of the touchscreen phone era: it looks to me like companies like Nokia lost to Apple primarily because they didn't have good enough software to go with the hardware.

You can point fingers at how Nokia had three different platforms fighting with each other, but maybe that's just a symptom and the real cause was that the software side needed several times the resources it was given.


It's surprising that it was strictly memory-to-memory. Register-starved architectures like the 6502 could be performant, but only in hand-coded assembly. Intel had to have known this, and yet designed an architecture with no registers that was almost entirely at the compiler's mercy for throughput. Other blinkered things, like having only immediates for one and zero, pale by comparison.

When you can't get the basics right, it's inevitable that the OO aspects of the 432 would have failed.


> Intel had to have known this and yet designed an architecture with no registers that was almost entirely at the compiler's mercy for throughput.

They repeated this pattern with Itanium/VLIW, but this time the marketing was much better, to the point that other vendors all but deprecated their architectures before Itanium was even released.




