It's interesting to see that, although what we now call a "microprocessor" had already been invented, that paper uses the same word to describe a CPU implemented with microcode (and not a very small processor that fits on a single chip).
A little background here - Lisp is an ideal language for genetic programming (the tree-based flavor of genetic algorithms) because its programs are written as S-expressions, which map directly onto trees (warning, PDF):
This tree can in turn be encoded as a binary string, which serves as the DNA of the genetic algorithm.
So evolution is easy to simulate by randomly mutating nodes in the tree, and operations like crossover (exchanging branches between two parents, the analogue of sexual reproduction) are just as straightforward.
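To make that concrete, here's a minimal Common Lisp sketch over a toy arithmetic language (+, -, * over x and a few constants). The names (*functions*, random-tree, mutate, crossover) and the per-node-probability operators are my own illustrative choices, not anything from the paper or a standard library:

    ;; A toy expression language: operators and leaves (illustrative only).
    (defparameter *functions* '(+ - *))
    (defparameter *terminals* '(x 1 2 3))

    (defun random-elt (list)
      (nth (random (length list)) list))

    (defun random-tree (depth)
      "Grow a random expression tree at most DEPTH levels deep."
      (if (or (zerop depth) (zerop (random 3)))
          (random-elt *terminals*)
          (list (random-elt *functions*)
                (random-tree (1- depth))
                (random-tree (1- depth)))))

    (defun mutate (tree &optional (rate 0.1))
      "Copy TREE, replacing each subtree by a fresh random one with probability RATE."
      (cond ((< (random 1.0) rate) (random-tree 2))
            ((consp tree)
             (cons (first tree)
                   (mapcar (lambda (arg) (mutate arg rate)) (rest tree))))
            (t tree)))

    (defun random-subtree (tree)
      "Pick a random subtree of TREE (possibly TREE itself)."
      (if (and (consp tree) (plusp (random 3)))
          (random-subtree (random-elt (rest tree)))
          tree))

    (defun crossover (a b &optional (rate 0.1))
      "Copy A, splicing in random subtrees of B with probability RATE per node."
      (cond ((< (random 1.0) rate) (copy-tree (random-subtree b)))
            ((consp a)
             (cons (first a)
                   (mapcar (lambda (arg) (crossover arg b rate)) (rest a))))
            (t a)))

    ;; e.g. (mutate '(+ (* x 2) 1))        => maybe (+ (* x 3) 1)
    ;;      (crossover '(+ x 1) '(* x x))  => maybe (+ (* x x) 1)

A real GP system would normally pick a single random crossover point per parent and bound tree depth, but the point stands: because programs are just nested lists, the genetic operators amount to a few lines of list surgery.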
Personally, I think genetic algorithms are more approachable than neural nets and would like to see them get more emphasis in machine learning (especially with modern video cards). When you think about it, both approaches are searching for local maxima/minima in an effectively infinite space, trying to find the function that maps inputs to outputs. Sure, genetic algorithms generally take more computing power, but that becomes less of a problem with each passing year. Given enough time, evolution always wins!
Not much about Lisp here. It describes the design of a microcode-based machine that could be used for all sorts of things. You could use it to implement a PDP-11, an x86, or whatever (though it's biased towards 32-bit machines).
Google "microcode" and you'll get a start. Hennessy and Patterson will carry you further.
I'll admit that this stuff is way over my head, but it's things like this that make me wonder why Lisp machines/processors never really caught on... can anyone here shed some light on this for me?
Moore's law and economies of scale wiped out any advantage that dedicated hardware could give.
It wasn't just Lisp or Smalltalk machines -- by the late 80s / early 90s, anything that wasn't an x86 machine had pretty much been wiped out, apart from the small slice of the market that Apple held. Things that had been clearly superior, like the Amiga's custom graphics chipset, the Motorola 68k CPU lineup, and the DEC Alpha processor, couldn't compete on cost at all -- let alone bespoke, effectively custom architectures with exotic microcoded support for Lisp or Smalltalk, etc.
When something cheaper and faster comes along. Until then, the GPU isn't seeing much competition. Perhaps TPUs/AI chips will gobble up that market one day.
Basically, Lisp machines were expensive workstation computers, and PCs killed all the workstation manufacturers. Sun SPARCstations and Silicon Graphics machines bit the dust as well. Nobody could justify the cost anymore.
> Lisp machines were expensive workstation computers
In most cases, yes, but the language also found its way into some very unexpected places. For example, a subset of Common Lisp, "L", was at the core of Prof. Rod Brooks' Subsumption Architecture, which enabled the amazing mobile robots built in his lab. He ran Lisp on 16 MHz 68332 embedded processors. https://www.researchgate.net/publication/2949173_L_--_A_Comm...
Essentially, general-purpose CPUs became better at running Lisp than the more specialized processors. A secondary cause was the onset of the AI winter, which meant that the major market for these machines went away. (These factors also crushed Japan's Fifth Generation computer project, which bet heavily on a logic-programming paradigm for AI that became completely irrelevant.)
UNIX / C won, and, like the Smalltalk vendors, the Lisp machine companies believed their own hype and thought the superior (and high-priced) product would win the day. Turbo Pascal and C were good enough, followed by good-enough tools from Microsoft, followed by good-enough open-source Linux / BSD.
I do wish someone would revisit these machines in the modern era, but it looks like the more conventional RISC-V will be the Linux of chips. It seems like you could build a Connection Machine on a chip these days.
Huh, I wonder if you could fit a CM-1 on an FPGA now. The small model had 16k 1-bit processors, with 4k bits of RAM each. (I haven't played with FPGAs yet and don't know what's available.)
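For the memory side at least, here's a quick back-of-envelope check (this only counts the per-processor data RAM, not the bit-serial processor logic or the hypercube router):

    ;; 16,384 processors x 4,096 bits each, converted to megabytes
    (/ (* 16384 4096) 8 1024 1024)  ; => 8

So that's 8 MB of data memory in total; whether the 16k processors and the routing network fit alongside it on a single FPGA is the harder question.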
Here is a picture of the author of that paper working on the CADR machine (the successor to the CONS machine): http://www.computerhistory.org/chess/stl-431614f64ea3e/