A more appropriate title, taken straight from the article itself, might be: "Weird and Innovative Chips".
Loved this post, anyways. I think it's important to look at radical and strange computing paradigms from the past, even if their DNA is not obviously in today's mainstream architectures.
Would be interesting to see if there are newer examples of these weird chips. After all, the post is only from 2005 (going by the copyright info at the bottom of the page).
The Tile chips and descendants made by Tilera and later Mellanox might fit the bill. They were noted in the press quite a few times back in 2010-2013 or so. They had 64 cores (sometimes other numbers) and could run Linux (or a bare-metal environment). The idea was (and is) to have a grid of identical cores, each of which can talk at high speed and low latency to its four neighbors. The "outer" cores used their outward-facing bus to talk somewhat directly to DDR, Ethernet, and other ports. If a core in the middle wanted to do an Ethernet send, it had to hand the data to a neighbor, and so on until it reached a suitable "outer" core. And you could (perhaps even should) program these communications explicitly.
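To make the "talk to your neighbor" model concrete, here's a minimal sketch of dimension-order (X-then-Y) routing, the usual scheme on meshes like this. The grid size, coordinates, and the route_xy name are illustrative, not Tilera's actual API:

    /* Illustrative only: dimension-order (X-then-Y) routing on an
     * 8x8 tile mesh. Each "hop" is one neighbor-to-neighbor transfer. */
    #include <stdio.h>

    static void route_xy(int sx, int sy, int dx, int dy)
    {
        int x = sx, y = sy;
        printf("tile(%d,%d) -> tile(%d,%d):\n", sx, sy, dx, dy);
        while (x != dx) {                      /* resolve X first... */
            const char *dir = (dx > x) ? "east" : "west";
            x += (dx > x) ? 1 : -1;
            printf("  hop %s to (%d,%d)\n", dir, x, y);
        }
        while (y != dy) {                      /* ...then Y */
            const char *dir = (dy > y) ? "north" : "south";
            y += (dy > y) ? 1 : -1;
            printf("  hop %s to (%d,%d)\n", dir, x, y);
        }
    }

    int main(void)
    {
        /* A core in the middle relaying toward the edge tile that
         * (in this hypothetical layout) owns the Ethernet port. */
        route_xy(3, 3, 7, 3);
        return 0;
    }

A middle core never touches the Ethernet port directly; it just makes explicit hops toward the edge tile that owns it.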
Linux and GCC support meant it was easy to program, but perhaps not easy to program well enough to beat x86 (which, for most potential customers, was probably the more relevant comparison, rather than the DSPs, FPGAs, etc. that Tilera suggested might be replaced).
Other interesting systems worth poking your nose into are the C.mmp and Cm* architectures developed in the '70s at Carnegie Mellon. Also the nCUBE.
IIRC, C.mmp was a multi-CPU, multi-memory-bank setup where any CPU could connect to any memory bank via a crossbar switch. Cm* was (I think) some sort of multi-CPU, multi-memory architecture with a packet-switched bus as the interconnect.
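As a rough sketch of what the crossbar buys you: every CPU can reach a different bank in the same cycle, and requests only serialize when two CPUs want the same bank. (If memory serves, C.mmp paired 16 PDP-11s with 16 banks; the arbitration below is purely illustrative.)

    /* Illustrative only: one cycle of 16x16 crossbar arbitration.
     * Each CPU requests a bank (-1 = idle); a bank grants at most
     * one CPU per cycle, so only same-bank requests conflict. */
    #include <stdio.h>

    #define CPUS  16
    #define BANKS 16

    int main(void)
    {
        int want[CPUS] = { 3, 7, 3, 0, -1, 7, 12, 5,
                           9, 9, 1, 15, 2, 8, 4, 6 };
        int granted_to[BANKS];
        for (int b = 0; b < BANKS; b++)
            granted_to[b] = -1;

        for (int c = 0; c < CPUS; c++) {
            int b = want[c];
            if (b < 0)
                continue;
            if (granted_to[b] < 0) {
                granted_to[b] = c;      /* crossbar path established */
                printf("cpu %2d -> bank %2d granted\n", c, b);
            } else {
                printf("cpu %2d -> bank %2d stalls (cpu %d holds it)\n",
                       c, b, granted_to[b]);
            }
        }
        return 0;
    }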
They all predate any possibility of being 'chips', so by that criterion they wouldn't have qualified for this article/book, but they're probably still interesting nevertheless.
That's a somewhat misleading response. It isn't mentioned in the linked article, "Weird and Innovative Chips", but in an entirely different section called "Unix and RISC, A New Hope".
The linked "article" is one chapter in a bigger work, and they only put chips in one section even if they could be in two. It has an article in the greater work as this is one chapter and shouldn't really be read in isolation as the rest of the work references chips in different sections.
One wonders if some of these designs might make a comeback as Moore's law slows down.
For the longest time, processor performance was dominated by whoever had the best manufacturing process or, more recently, by who could best keep up with the fabs' process updates.
Interestingly enough, the AT&T Hobbit processor was used in the original BeBox, which ran BeOS (which we discussed the other day), though only about 30 of those machines were made, and they were used only for internal development.
I was not impressed by the i432 architecture. Like the Multibus, it seemed to be trying to be very fancy, but with no elegance or taste. The result was an architecture that... um... might possibly have worked. It's still an ugly, tasteless architecture, though.
The i432 was well before my time, so it's hard to account for the privilege of hindsight, but I found myself looking in horror at most of its design. Hardware needs to account for the fact it's ultimately a piece of physics, and the i432 dismissed this completely.
It's a real shame, because the failures of old innovative architectures (faults ultimately down to poor, complex designs) have burned people badly enough that few dare to try again. For all its flaws and its forever-vaporware status, the Mill shows that you can make an architecture much safer without paying extra, as long as you design with principle.
You must be thinking of Multibus II. The original Multibus (IEEE 796) was very basic. Historical note: the DSP satellite (NORAD missile warning) ground network ran on Multibus-based systems from 1988 until 2005.
> It's still an ugly, tasteless architecture, though.
That's a bit of an odd thing to say about it, considering that many of its goals are being resurrected these days because it's become clear that the RISC/Unix model is insufficient for modern computing. Sure, RISC designs may be "pretty" and KISS, but they quickly end up polluted by ugly hacks to make them perform well, and more recently have had layers of cruft added for security purposes. You need look no further than aarch64, particularly the 8.x supplements, or the ARM CHERI efforts to see this in action.