A Brief History of Lisp Machines (2008) (andromeda.com)
106 points by type0 on Sept 28, 2017 | 93 comments



I was around the AI Lab at the time (working for the EECS department, sysop'ing their undergrad Lisp Machines and DEC-20), and the dream was to incarnate Lisp in hardware and surpass the slow, multi-user PDP-10s/DEC-20s in use at the time. Plus, the sheer temptation of running Lisp "all the way down" (well, not in microcode, though that was written in a Lispy assembler) was just too great to resist. And it was early in the workstation era (Sun, Apollo, etc.), and people were hooked on the idea of personal graphics-based workstations.

Within a few years, though, desktop machines running optimized Lisp compilers were surpassing the raw performance of the Lisp Machines, though of course there was nothing like the whole system (Genera).

In the end, "worse is better", plus "AI winter" happened, which put the Symbolics/LMI/TI LispMs out of business.

But something real was lost--I don't think the world has ever seen a more powerful development environment.

One step forward, and four steps back...


There's no technical reason a Lisp processor couldn't have followed the same development path as workstation/PC processors and kept up, but it didn't have the volume to sustain the development effort.


Well, being a very CISC-y architecture meant they couldn't have kept up with Intel, who spent (and still spend) billions on their line.


The last CPU that was being developed at Symbolics is actually RISC: https://en.wikipedia.org/wiki/Symbolics#Sunstone

The competitor LMI was also developing their next-generation Lisp machine RISC processor: https://en.wikipedia.org/wiki/Lisp_Machines#GigaMos_Systems

More about the latter: http://fare.tunes.org/tmp/emergent/kmachine.htm


Well, Symbolics did not manufacture the new CPU. That would have been too costly.

LMI manufactured prototypes of their CPU, but these were older technology - not microprocessors, but VME-board sized CPUs.

Xerox mentioned that they were developing a Lisp RISC, but it never surfaced. The SPUR project designed a Lisp RISC chip: ftp://ftp.cs.wisc.edu/markhill/Papers/jsscc89_vlsi_spur.pdf


? x86 is obviously very CISCy. Another example is IBM's Z architecture, which they keep making faster and faster. The argument is _economic_, not technical.

IOW, the architectural benefits of Lisp machines weren't enough to overcome the benefits offered by Moore's law. Today things look different, but it would be a much harder sell to push Lisp or any non-C language.


I still fantasize about having lisp "all the way down", but I'm not sure what I mean or ought to mean when I am fantasizing. On the one hand, I mean implementing lisp in hardware as fundamentally as possible (I don't really know why this would be valuable, but I'm still drawn to it). On the other hand, I mean having lisp be the foundation at the software level (i.e., the kernel and the rest of the OS; I can more easily see how this would be valuable).

Can someone here speak to the notion of a lisp machine in the future? Is there any chance this could happen? Would it be valuable? Does anybody else here have this same dream/fantasy/whatever-you-want-to-call-it?

(edited slightly for wording and clarity)


Ever since I found out about Lisp machines I've been a bit obsessed with them. When I got the leaked distribution of Symbolics Genera going in Linux I felt like I was in possession of a crashed UFO. The user experience of the Listener, with its rich output (almost a "scrolling desktop") was exactly what I wanted from a command line. As I imagined what that system would be like if development had continued I started getting so many ideas, it was thrilling.

And at that time I hardly knew Lisp! What excited me was the user experience of the Listener, the notion of programming being a way to use the computer, not just to construct software - and the way each piece of software in the system was practically an API for my own use. This was sci-fi stuff to me, extremely inspiring.

I don't know how we could move towards the creation of modern Lisp machines. Disappointingly for this old Amiga user, we don't really have regular computers that aren't ugly old x86 any more. But I think about trying to work these ideas into my software development all the time. Genera had such an impact on me I can't avoid it.


Consider then that it was what RMS lived and breathed for years.

Also, I feel that in a more limited sense this is what draws people to the Unix CLI. And in turn it's what sold OS X to academia beyond media production courses. And it's something that both Apple and Linux userland programmers ignore at their peril.


>the notion of programming being a way to use the computer, not just to construct software - and the way each piece of software in the system was practically an API for my own use.

I've never used a Lisp machine, but based on your description, it sounds like the experience of using them might have been somewhat like using the Oberon system created by Niklaus Wirth - or the other way around (based on a BYTE magazine article [1] about Oberon that I read, IIRC - never used Oberon either).

[1] I thought so because this part of your comment:

>the way each piece of software in the system was practically an API for my own use

matched somewhat with something I read in that BYTE article, which was something to the effect that once you had written a subroutine in Oberon, it could be called from anywhere in the OS. IOW, in a sense, the whole OS was like a single program, that you could program. Cool concept.

Though the Oberon system might have been much less evolved, or whatever - as I said, used neither, just interested in the thing.

P.S. From the Wikipedia article about Wirth:

https://en.wikipedia.org/wiki/Niklaus_Wirth#Humor

[ Wirth has reportedly told the joke that, because Europeans pronounce his name properly, while Americans pronounce it as "nickel's worth", he is called by name in Europe and called by value in America. ]


I had access to the Native Oberon version during the '90s.

Oberon was inspired by Mesa/Cedar, there are quite a few Xerox papers about how Mesa/Cedar used to work. This is probably the most relevant one.

Think about a Lisp/Smalltalk environment, but based on a strongly typed language instead.

Regarding Oberon, it had similar ideas.

Basically there were no programs, only modules (strongly typed dynamic libraries).

Any procedure/function with a special type signature could be called from the CLI environment, a UI action, or after selecting a UI element, depending on how the mouse buttons were used.

Also, the environment was focused on graphics, not plain text.

The last version before Active Oberon was introduced, System 3 with Gadgets, had a quite nice Amiga-style GUI.

You can find tons of screenshots on my site.


>Basically there were no programs, only modules (strongly typed dynamic libraries).

>Any procedure/function with a special type signature could be called from the CLI environment, a UI action, or after selecting a UI element, depending on how the mouse buttons were used.

>Also, the environment was focused on graphics, not plain text.

Pretty cool. Thanks for mentioning it.

>You can find tons of screenshots on my site.

I googled for your HN username + " site"; is it this one:

http://www.progtools.org/blog.php ?


Yup


I actually stumbled upon Oberon in high school. My computer teacher was the only Oberon developer I've ever met.


Oberon was already a later system.

Earlier Wirth designed a personal workstation called Lilith with a stack-architecture CPU and Modula-2 as its system programming language. This was in many ways similar to Smalltalk or Lisp Machines, but with reduced complexity.

https://en.wikipedia.org/wiki/Lilith_(computer)


Interesting. I do seem to remember that there was a BYTE magazine article about the Lilith computer too (or maybe I read about it somewhere else, it was a while ago). Also remember some issue(s) of BYTE that had articles about stack machines, which were a bit of a rage at one time, I think. I had read a bit about some of the pros and cons at the time. Also seem to remember that the JVM implements a stack machine.

Edit: Looked it up - the JVM does seem to:

https://en.wikipedia.org/wiki/Stack_machine#Commercial_stack...


> Disappointingly for this old Amiga user, we don't really have regular computers that aren't ugly old x86 any more.

A Raspberry Pi is "regular computer" enough for me, at least as a hack platform. Ditto all the other ARM SBCs which are largely similar. (Odroid is fairly nice, too.)

The special microcoded systems are dead, but ARM is a nice enough design and it's available from a ton of different sources.


We have Guix as a system configuration/package manager, the GNU Shepherd as an init, and the GuixSD initrd itself, all of which are written in Guile Scheme.

With the GNU Hurd important parts of the OS could be written in Scheme as well.

The problems start with the desktop, where we don't really have anything that's well-integrated and lispy. Sure, there's StumpWM (Common Lisp), and there's Emacs, but they are separate programmes and there's no link between them.

There is McCLIM[1], a GUI toolkit which looks like a continuation of lisp machine ideas, but as far as I know it does not have an active community (unlike Guix and Guile, whose communities actively work on a Scheme-powered operating system).

[1]: https://common-lisp.net/project/mcclim/excite.html


Sadly the GUI world seems uninterested in that kind of "interactivity". Instead they keep taking away options and in general dumbing down the UIs with the belief that this will make computers more approachable for the masses.

But at that point, why bother? Build a games console or a cable TV box with a web browser and call it a day...


I really wish the GNU project would standardise on Common Lisp rather than Scheme. Lisp-2, false NIL, full-powered macros — Common Lisp has a lot going for it.

An emacs written in Common Lisp, running in StumpWM, running atop a CL Guix, Shepherd & GuixSD would be a thing of beauty.


>Lisp-2, false NIL, full-powered macros — Common Lisp has a lot going for it.

I agree; however, my reasons for preferring Common Lisp over Scheme wouldn't really be those, but that the important stuff is already standardized, proven, and documented (i.e. conditions & restarts, the object-oriented system), while in the Scheme world they are left at "roll your own" status.
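
For a taste of what "already standardized" buys you, here is a minimal sketch of the standard condition system (HANDLER-BIND plus a named restart); the function name and the recovery value are made up for illustration:

    ;; PARSE-ENTRY signals an error but attaches a USE-VALUE restart.
    (defun parse-entry (s)
      (or (parse-integer s :junk-allowed t)
          (restart-case (error "Bad entry: ~S" s)
            (use-value (v) v))))

    ;; A caller decides, without unwinding the stack, to substitute 0.
    (handler-bind ((error (lambda (c)
                            (declare (ignore c))
                            (invoke-restart 'use-value 0))))
      (mapcar #'parse-entry '("1" "oops" "3")))
    ;; => (1 0 3)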

I'd say Scheme is a great language for learning and exploring, while CL is a better choice for creating production software.


> Lisp-2, false NIL, full-powered macros — Common Lisp has a lot going for it.

Heh, I find none of these things desirable :) I guess some people are just wired differently. I prefer a single namespace for all values, #F as the only false value, and (optionally) hygienic macros with syntax-case (which does not prevent traditional macros of the defmacro kind).


What's the current take of the Scheme community on syntax-case? Last I looked it was not in R7RS...


It's not in R7RS-small, but it will probably find its way into R7RS-large:

http://trac.sacrideo.us/wg/wiki/YellowDocket


The thing that tends to push me into agreement is that Common Lisp is compiled whereas Scheme/Guile is bytecode. If we want to use Lisp for OS purposes, I think it needs to be compiled for the speed benefits. Is there any other Lisp or similar language that is compiled?

Maybe I just irrationally distrust bytecode VMs.


I don't think the distinction is helpful. Guile Scheme is in fact compiled, and the resulting binary is in ELF format. It's still bytecode for the Guile VM, but that by itself tells you nothing about performance.


I did not know that, thank you for the correction; it certainly opens me up more to Scheme. There are a handful of tools that use it that I really want to dig into (GNU mcron, for example), but I've been trying to decide if it's worth the time investment.


Scheme can be compiled. https://github.com/cisco/ChezScheme


Stallman does not like Common Lisp.


I know, but I just don't get it. Common Lisp sure feels closer to elisp than Scheme does, and I'm constantly missing things from Common Lisp when I'm writing elisp (e.g. reader macros, packages, or characters as a type distinct from integers).

For me, at least, elisp just feels like a primitive Common Lisp while Scheme feels like a completely different language with a similar surface syntax.


My take: he did not like the added complexity in CL compared to Maclisp or even simpler Lisps. I doubt he is a big believer in Scheme either.


I wonder if he really liked Lisp Machine Lisp or ZetaLisp.


That's what he has used and IIRC he claims to have implemented CL.

But if you look at what was missing in elisp: an object system, closures, keyword arguments, ... That stuff was also in LML.


I don't get the obsession with defmacro vs. syntax-case. Defmacro can be implemented in syntax-case in about 10 lines, whereas the power of syntax-case is very much non-trivial to implement in CL.


I understand that R7RS got rid of syntax-case and standard Scheme is back to syntax-rules only (which is another thing I like about Common Lisp: the standard has been in existence for 23 years).

I'm reminded of this article (https://fare.livejournal.com/189741.html), where this Common Lisp macro:

    (defmacro nest (&rest r) (reduce (lambda (o i) `(,@o ,i)) r :from-end t))
Has to turn into this in Racket (which isn't quite Scheme):

    (define-syntax (nest stx)
      (syntax-case stx ()
        ((nest outer ... inner)
         (foldr (lambda (o i)
                  (with-syntax (((outer ...) o)
                                (inner i))
                    #'(outer ... inner)))
                #'inner (syntax->list #'(outer ...))))))
And was incredibly difficult to reason about. And that author likes Racket.
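
For concreteness, here is what the Common Lisp version does, with a made-up example:

    (nest (let ((x 1)))
          (when (plusp x))
          (print x))
    ;; macroexpands to:
    ;; (LET ((X 1)) (WHEN (PLUSP X) (PRINT X)))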


The Scheme community has obsessed over macro hygiene since the 1970s. Yet as of 1997, Scheme still had no error handling mechanism. The R5RS spec spoke about situations which perpetrate an "error", without defining what an error is, how an error can be caught, how an error can be programmatically generated at will, or even whether the image can continue running when an error occurs. (Yeah, macro hygiene will somehow save your project that is confounded by hacks to work around the lack of error handling!)

I'm not going to sit here re-evaluating Scheme, year by year, to see whether it's up to snuff yet.

Someone drop me an e-mail when Scheme gets a well-defined, left-to-right evaluation order, and when its imperative forms all return a stable value like #f or whatever.


    (define-syntax nest
      (syntax-rules ()
        ((nest x) x)
        ((nest x ... (y ...) z) (nest x ... (y ... z)))))
That one is quadratic according to the author, but for 1050 expansions of a nest with 6 levels it is less than 10% slower. In Chez Scheme, the difference is even smaller (for more than 2000 expansions).

Edit: I decided to stress-test the Chez Scheme macro expander: for 30000 expansions, it was under 1s using the syntax-case version. I'd say that is very much good enough.


R7RS-small is a language in the spirit of R5RS. R7RS-large will have a specified low-level macro facility, probably syntax-case in one form or another.

The smaller standard does not specify a low-level macro facility. That doesn't mean an implementation won't have one; no Scheme has syntax-rules only.

I agree that defmacro is simpler, but nobody said otherwise. The nest macro is more or less a perfect case for defmacro, since it doesn't need hygiene.

If defmacro is so important, there are several Schemes with it out there, and if you really need it, it is trivially implemented using low-level hygienic macro facilities.


How do you describe the semantics of `nest` without showing the implementation?


The McCLIM community is somewhat active, but very small...


There have been various suggestions over the years: LispOS, Tunes, Loper. There have even been some successes: Movitz, Mezzano.

A few years back, I managed to get a Lisp interpreter written in x86 assembly language reading from a floppy and running on the bare metal on an old 32-bit laptop.

I think the leading edge has moved on from Lisp Machines, but as the mainstream took a wrong turn a long time ago, it still has a long way to go to catch up. For me, the hot topics in system design are capability-based security, dependent types, and live programming.

My "dream" is to have a statically typed Lisp (with type inference) running on commodity hardware (x64), and either compiling to the bare metal (faster) or to byte code (smaller, less work, and portable). This would be image-based, so no file system would be necessary, and have a single-address space, so you could treat the entire internet as if it were part of your machine's memory. It would have a structure editor instead of an Emacs variant.

This is probably more than I'll ever have time to do. I have implemented my own Lisp dialect (which could form the basis of the system proposed above), and am using it to develop a visual dataflow programming language (http://web.onetel.com/~hibou/fmj/FMJ.html). Many here and elsewhere are skeptical of the value of this, but I'm convinced it's the right thing. My short-term goal is to add dependent types to the new language.


I think about this often. I'm a hardware designer (digital design, ASIC, FPGA), so in theory I could help here. I don't really know what was special about the Lisp hardware, though... As far as I can tell it was mainly about helping with the type system and getting that to run at a reasonable speed on the hardware of the day.

So I've dismissed the idea of a custom processor. It just doesn't seem to have much value vs using ARM or RISC-V or x64.

I've thought about the kernel aspect, but this mainly seems like drudge work: reimplementing stuff that has been a solved problem in Linux for a long time. Not to mention drivers, which would be a massive effort.

So I sort of settled on the idea of a lisp based userland. That seems at least feasible.

I don't really understand containers well enough to understand what exactly is exposed to a program you are writing. I've heard you can run statically compiled programs without installing a base distribution. So maybe that would be where to start.


"Lisp-based userland" is how I'm going to describe Emacs from now on.


I would imagine that a LISPy CPU would have stuff like CONS/CAR/CDR as primitive instructions, and probably memory management heavily optimized for processing lists.


The original LISP machine, the IBM 704, had CAR and CDR as primitives. And boy, were they primitive:

> These names are hold-overs from the original implementation of LISP on the IBM 704. That machine had partial-word instructions to reference the address and decrement parts of a machine location. The a of CAR comes from "address", the d of CDR comes from "decrement". the c and r come from "contents of" and "register". Thus CAR could be read "contents of address part of register".

http://www.iwriteiam.nl/HaCAR_CDR.html
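
Those accessors, and compositions like CADR, survive unchanged in every modern Lisp; a trivial Common Lisp illustration:

    (car  '(a b c))   ; => A       "contents of the address part"
    (cdr  '(a b c))   ; => (B C)   "contents of the decrement part"
    (cadr '(a b c))   ; => B       the CAR of the CDR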


One of the original Lambda the Ultimate papers was "Lambda: The Ultimate Opcode", which described the hardware design of a computer whose ISA was Lisp itself. The paper is full of strange, alien ideas, like not having an ALU (Lisp is naturally symbolic; why would we need one?).

Guy Steele and Gerry Sussman also made a working processor from this design, but only fabricated a few prototypes (apparently it was absurdly slow, even by their standards). If you come by Gerry's office he'll gladly show one off to you.

http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-...


Why a Lisp machine? (To solve a problem which is no longer a problem today - so it would be a solution looking for a problem.)

"Why Lisp Machines? The standard platform for Lisp before Lisp machines was a timeshared PDP-10, but it was well known that one Lisp program could turn a timeshared KL-10 into unusable sludge for everyone else. It became technically feasible to build cheaper hardware that would run lisp better than on timeshared computers. The technological push was definitely from the top down; to run big, resource hungry lisp programs more cheaply. Lisp machines were not "personal" out of some desire make life pleasant for programmers, but simply because lisp would use 100% of whatever resources it had available. All code on these systems was written in Lisp simply because that was the easiest and most cost effective way to provide an operating system on this new hardware."


>Can someone here speak to the notion of a lisp machine in the future? Is there any chance this could happen? Would it be valuable?

Yes, of course it would be highly valuable.

The reason for the superiority of a Lisp machine is the following: on a Lisp machine, what you manipulate is not files (text files or binary files) but meaningful information stored as s-expressions, which can be directly shared by many applications instead of being sent (copied) through pipes between processes in separate address spaces. This is where the power of a Lisp machine lies.
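
A toy sketch of the contrast (all names made up): in a single image, "applications" are just functions handing each other live structure, with no serialization and re-parsing step in between:

    ;; One "application" produces live s-expressions...
    (defun mail-client-messages ()
      '((:from "rms" :subject "GNU")
        (:from "jwz" :subject "emacs")))

    ;; ...and another consumes them directly - no pipe, no parser.
    (defun search-app (messages field value)
      (remove-if-not (lambda (m) (equal (getf m field) value))
                     messages))

    (search-app (mail-client-messages) :from "rms")
    ;; => ((:FROM "rms" :SUBJECT "GNU"))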

This, and much more, is masterfully explained in this paper by Robert Strandh:

http://metamodular.com/lispos.pdf

Recommended reading!!


I used to dream about this too.

Not so much anymore though. I feel like static type systems have progressed enough that dynamic typing really has little appeal to me at this point, especially the idea of dynamic stuff "all the way down."

Nowadays my fantasy is more along the lines of:

* a machine with a simple instruction set for CPU and GPU and without slow transfers between the two

* a modern statically typed language that can be used to program both CPU and GPU, that's basically "Rust, but better, and with simpler syntax without braces and semicolons and commas and with easier macros and..."

* some simple garbage collected extension language for runtime/dynamic stuff. Still statically typed though.


Well, on a Symbolics Lisp Machine you could run Ada, C, Pascal, Fortran and some other exotic stuff. Probably there was an ML for it and I guess you could bring up an early Haskell compiler (Yale Haskell) on it.


I'm pretty sure Prolog was there too. If I'm not mistaken Symbolics C had safe pointer arithmetic and a garbage collector.


Prolog was one of the main language offerings, though it was not statically typed. ;-)


> Rust, but better, and with simpler syntax without braces and semicolons and commas and with easier macros and...

Makes me wonder: could there be a fully statically typed variant of Lisp? Did anyone try that?


>Makes me wonder: could there be a fully statically typed variant of Lisp? Did anyone try that?

FYI: Lisp already allows you to specify data types if you want, as if it were a statically typed language. If you do this, a good Lisp implementation like SBCL will do static type checks and will also improve execution speed significantly -- with this plus other tricks, Lisp can approach Fortran and C speed.
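
A minimal sketch of what that looks like in SBCL (the function DOT is made up for illustration):

    ;; With these declarations SBCL compiles to unboxed double-float
    ;; arithmetic and warns at compile time about type mismatches.
    (defun dot (a b)
      (declare (type (simple-array double-float (*)) a b)
               (optimize (speed 3) (safety 1)))
      (let ((sum 0d0))
        (declare (type double-float sum))
        (dotimes (i (length a) sum)
          (incf sum (* (aref a i) (aref b i))))))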


Well, there is Shen. Static typing; I think it can do dependent typing... It doesn't have the affine types of Rust, though. Not sure how it works out in practice.

Discussed here, for example: https://news.ycombinator.com/item?id=9297665


There is Typed Racket which has been under substantial development for many months now, with good results.

https://docs.racket-lang.org/ts-guide/


There's https://github.com/murarth/ketos which is not exactly that but related


With type inference, you'd get the extra speed of static typing (no run-time type checks) and the convenience of dynamic typing (no type declarations).


It happened (for a while) with Dylan on early, unshipped versions of the Apple Newton. Dylan was an object-oriented variant of Scheme, and there was an OS that was implemented nearly all the way to the metal in it.

At least that's what I heard; I was working on the C++ based Newton OS, and sat next to the Dylan guys for a while, until they were told to stop work on their stuff. So I could have the level of "metalness" wrong.


> Dylan was an object-oriented variant of Scheme

Not really. Dylan was a stripped-down version of Common Lisp (which has an object system built in) with some of the fringier bits (like :before, :after, and :around methods) and the parentheses stripped off.


Nah, it was Scheme plus (a stripped-down) CLOS. A possible point of confusion was that the development environment, including the CPS Scheme compiler, was written in Common Lisp. Also, all values were instances of (mini-CLOS) classes.


Wow. Did it have call/cc? No special variables? I would have sworn it was based on CL when I played with it back when I was a customer while you guys were building it in MCL. But I was mostly using the object system.


The Dylan team consciously chose not to provide call/cc. It had a restricted form of call/cc called "bind-exit" that supported only upward continuations.

It didn't have special variables, but it did have module variables, which were lexically accessible from anywhere inside the module where they were defined, or in any module that imported them. They were more like entries in Smalltalk dictionaries than special variables (a possibly-overlooked point is that the initial design discussions at Apple included several Smalltalk enthusiasts from ATG).

The object system would definitely have looked like Common Lisp. It was CLOS minus some of the bells and whistles, as you observed.

Also, like Scheme and unlike Common Lisp, Dylan was a Lisp-1, not a Lisp-2.

For those unfamiliar with that jargon, it refers to how namespaces are organized in Lisps. In Dylan, as in Scheme, there was a single namespace for variables, functions, types, and so on. For example, <collection> was the abstract superclass of all collections, but the name "<collection>" was just a read-only module variable that happened to refer to the (anonymous) class.

By contrast, in Common Lisp there are separate namespaces for variables, functions, classes, and some other things. Thus, in Common Lisp (but not in a Lisp-1), you can have a function named "address", a class named "address", and a lexical variable named "address", and they don't collide or shadow one another.
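
A tiny sketch of that in Common Lisp, reusing the name ADDRESS deliberately as in the example above:

    (defclass address () ())             ; ADDRESS the class

    (defun address (thing)               ; ADDRESS the function
      (format nil "address of ~A" thing))

    (let ((address "221B Baker St."))    ; ADDRESS the lexical variable
      ;; None of the three collide: (address ...) finds the function,
      ;; bare ADDRESS the variable, and 'ADDRESS can name the class.
      (values (address address)
              (make-instance 'address)))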


The Newton didn't have hardware designed to facilitate running Dylan, though, so it seems a somewhat lesser level of metalness.


The MMU on the ARM 610 had features intended for Dylan that were designed by one of my ex-cow-orkers; the sub-page protection system (1K granularity) was put there specifically to improve the garbage collection behavior of the system. (The sub-page protections allowed better than 4K granularity of physical page sharing, which helped a lot, since RAM on the newt was always precious.)

The original MessagePad might not have had enough memory to run a serious Dylan environment. But unshipped versions of the tablet newt ("Senior") did have enough, and my understanding is that they did, at least for a while.


I worked on bauhaus, the second Dylan-based Newton OS--that is, the one developed in parallel with the C++/Newtonscript one. It used the C++ microkernel and it used the 7 low-level bottleneck functions from C QuickDraw. Everything else was written in Dylan.

I'm not sure whether it would fit on Junior; it might not. It was about half a megabyte. I don't remember how much room Junior had. I ran it daily, though, on a Senior prototype.

It wasn't a Lisp machine in the usual sense, though, and not just because it didn't have hardware tag bits and so forth. The development environment didn't run on Newton hardware; it was a heavily customized version of Macintosh Common Lisp called Leibniz which ran on Mac hardware. Our Newton hardware was ribbon-cabled to the Macs' NuBus slots. Leibniz had both Common Lisp and Dylan development environments in the same runtime image, complete with text editors and listener windows for Lisp and for Dylan. We used Common Lisp code to manage and customize the development environment, and we used Dylan code to implement bauhaus OS features.

So I guess, in a sense, the combination of Mac hardware plus Newton hardware plus Leibniz acted sort of like a Lisp machine, but less featureful.


Yes, perhaps I should have said something stronger than merely 'facilitate', but 'MMU parameters were better for GC' is a long way from the Lisp machine hardware/software mind-meld. It's still, as far as I can tell, pretty much just an MMU, right?


I think the main point was to have a more interactivity-friendly GC than was otherwise possible. Having the user notice GC pauses on an interactive computer - a tablet - would have been a problem. MCL provided an ephemeral GC using the MMU of the 68020 (external) or 68030/40 (integrated). It dealt with short-lived objects in main memory, avoiding virtual memory activity. A similar level of responsiveness probably was the goal.


On the other hand, the newt (Senior, anyway) was running Dylan on bare metal; you don't need to have fancy-pants LISP-accelerating hardware to do that. You just need to have the intestinal fortitude to write an OS and runtime in something not C :-)


So I know this isn't what you're talking about, but you might find this interesting as a machine that you interact with using Lisp. It's basically a very small computer using an Arduino chip as the processor:

http://www.technoblogy.com/show?1INT


You should examine why you dream of this; what is the attraction?

This isn't my itch, but some possible answers could include having hardware enforced type safety and architectural support for efficient execution. To give a rather extreme example of what is possible with a dedicated architecture, study the Reduceron [1,2]. IMO, doing something similar for Lisp would be much much easier.

Could it be done? Yes, absolutely and it would be great fun. You'd have to work with an FPGA though unless you have a sizable fortune to fab a chip (though 28nm is almost affordable).

[1] https://www.cs.york.ac.uk/fp/reduceron [2] https://github.com/tommythorn/Reduceron

PS: Reduceron has a hardware garbage collector


https://loomcom.com/genera/genera-install.html

Was gonna try this but haven't yet.

After it's up, then what?


Most of the links on this page are broken so here are the archived versions (thanks Internet Archive!):

The CONS prototype (AI Memo 444):

https://web.archive.org/web/20060211063312/http://home.comca...

The CADR (AI Memo 528):

https://web.archive.org/web/20050411174516/http://home.comca...

The Symbolics lisp machine museum:

https://web.archive.org/web/20000605210626/http://kogs-www.i...

After-the-demise Symbolics info:

https://web.archive.org/web/20060101010104/http://www.abstra...

P. Tucker Withington's contemporary thoughts:

https://web.archive.org/web/20040810204319/http://www2.theci...

Lisp family tree:

https://web.archive.org/web/20060202000304/http://community....



About a year ago or so I was bitten by the Lisp and Smalltalk bugs, and since then I've become very interested in Lisp machines, particularly Symbolics machines running the Genera operating system. I'm in my late 20s and thus I wasn't around when Lisp machines were in their heyday; however, I believe that it's sad how the proverbial baby (Lisp OSes such as Genera) was thrown out with the bathwater (Lisp machines) when Lisp machines were gradually replaced with Unix workstations in the late 1980s and early 1990s. It would have been nice had Genera or another Lisp OS been ported to the x86 instead of Lisp users having to switch to Unix. Sometimes I even dream of an alternative history where RMS embarked on a GNU-licensed Lisp operating system instead of GNU.

I wonder what type of legal challenges are inherent in making Genera open source? My limited understanding of the situation is that the IP of Symbolics is held up in an odd manner due to the dissolution of the company, but I don't know all of the details. It would be nice if Genera were open source and ported to the x86-64. Barring that, a free software clone of Genera would be nice, but it may be a substantial implementation effort.


John C. Mallery owns the IP of Symbolics right now, as far as I know. Someone mentioned that he had plans to open source it, but it was a long time ago.


With my limited (or next to no) knowledge of software copyrights, I'm wondering how the IP of a company ends up belonging to a single person. Is that the case?


Genera used more than 32 bits per word, so the Alpha was the only available target at the time for a software emulation of the system.

It would have been easier to port the LMI or TI systems to a workstation or a 386 PC, since they only ever used a 32-bit word. TI would have been the obvious candidate, as they were building 68020 systems that used the same case as the Explorer. Add a simple external MMU to the 68020 and a native compiler, and it would be reasonably fast.

I guess that LMI didn't have enough staff to look at something like this back then. There is a new Lambda emulator if anyone wants to play with it [1].

Xerox did port their software to workstations and I believe it is still sold by Venue.

[1] https://github.com/dseagrav/ld


If you read the original GNU manifesto, RMS was hoping to sucker the Unix community into slowly moving towards something very Lisp Machine-like.

https://www.gnu.org/gnu/manifesto.en.html


From a bit of searching:

Setup: https://web.archive.org/web/20170630125155/http://www.advoga...

Manuals: http://www.bitsavers.org/pdf/symbolics/software/

Code: https://archive.org/details/SymblicsOpenGenera

I learned OO programming on a Symbolics 3600 using ZetaLisp+Flavors around 1987. It was a cool system (when it did not have hardware issues). I used a Symbolics to implement perhaps the first simulation of kinematic self-replicating robots (and accidentally invented robot cannibals -- another story as a cautionary tale...)

Genera may be interesting to play with for fun and learning -- but Smalltalk is much more of a living thing with ongoing communities around Squeak, Pharo, Amber, and more.

I liked Genera but I loved Smalltalk (first with ObjectWorks/VisualWorks and then other variants). After using those two environments, pretty much everything since has been disappointing and felt like big steps backward.

(Well, the Newton OS was cool too including with "Soups"... Sad Apple killed the Newton OS just as the MP2100 with the StrongARM was good enough hardware to run it well... I also liked HyperCard which Apple also killed... And I also learned Forth before any of those and it was cool within its niche...)

Still, when I squint, I can kind of see a huge distributed Smalltalk-ish/Lisp-ish/Newton-ish/HyperCard-ish/Forth-ish system spread across all the networked browsers out there processing JavaScript/HTML/CSS. So, that's why I do JavaScript stuff right now -- and try to build a thinking/coding environment for the web inspired by those sorts of ideals (and others); some work in progress: https://github.com/pdfernhout/Twirlip7

I tried that before with Python and also Jython about a decade ago, but did not quite get it to be as interactive: http://patapata.sourceforge.net/critique.html

But using JavaScript seems more likely to succeed somehow than Python all things considered (warts and all) -- especially when you consider JavaScript's ubiquity. I figured if Dan Ingalls could swallow his pride and use JavaScript for the Lively Kernel, I could give it a try too (but approaching it from a different direction of more native DOM interactions via Mithril.js and Tachyons).


Genera got an emulator for the DEC Alpha and Xerox Interlisp-D got an emulator for SPARC and x86. Those were commercial products.


I programmed almost all of those machines at one point: the Interlisp flavors (Dandelion -- wrote custom microcode for it -- Dolphin, Dorado), the MIT CADR, Symbolics CADR, TI (LMI) CADR, and various 36xxs (had two 3650s in my office at one point, used them both), but I moved on before the Ivory Mac plug-in came along. It was a nice spur on the computing road, but ultimately not worth it. The tagging ideals still survive, e.g. in the RISC-V.

Sadly though the philosophy that everything should be programmable has mostly been lost, though partially revived in the browser. But only partially.


> The tagging ideals still survive, e.g. in the RISC-V.

I'm not sure what you are thinking of, but there's no such thing in RISC-V. Possibly you are referring to the goals of lowRISC, which will be RISC-V based but have various extensions.


Yes, that is what I was thinking of.


I own a Quadra Mac Ivory (I did just partially want to gloat). Genera is definitely worth the hype. The "present as" system in CLIM is amazing. Another thing is the cool app icon, which is a combination of the famous cons cell, but with the cdr pointing to an Apple icon! I will hopefully be making some more videos of it shortly. One thing I discovered already is that there was a Genera HyperCard API - imagine the world we could have had!

I did ask the fellow from Genera about disaster recovery (a failed or borked HDD), and he said that since I am a license holder he can send me CDs, or I can send him the HDD and have him image it. The full source for Genera is part of the image, so there are tons of interactive docs and source to explore!


When people find out I design Smalltalk computers, they often say something like "oh! You probably don't know this, but in the 1980s some people tried to build a computer to run Lisp and it failed completely". My reply is that every computer except two (PC clones and the IBM 360) from back then failed completely. So it doesn't make sense to claim being language specific was the cause of its demise.


In the 1980's, people also tried to build video phones and failed. Let's not try today.


Where can we get one?


Do you mean where you can get a Smalltalk computer? I still haven't finished though I have been working on it for a long time. But I am getting pretty close so it should be possible to get one in a few months.

Pictures of my previous efforts: http://www.smalltalk.org.br/fotos/ (with 68000 and ARM processor) http://merlintec.com/swiki/hardware/28.html (with my own processor starting in 1999)


Yes, a Smalltalk computer. Very nice. Are you working with SiliconSqueak? I can never tell what the status of that project is. I would definitely pick one up / build one.


Yes, SiliconSqueak is what I have been working on since 2009. It is actually based on an idea I had in 2004 but I didn't work on because I couldn't get the Squeak community interested. The latest description is from early 2015 and is missing a lot of stuff:

http://www.merlintec.com/download/jecel_phd_deposited.pdf

Currently I am replacing the ALU Matrix coprocessor described there with an alternative that is more generally useful. I will move the text somewhere public (GitHub, for example) so people can always access the latest version.

Back to the original topic: it is interesting to discuss what good a Lisp Machine (or Smalltalk computer) would be in 2017. Would it run the language faster than an x86 processor? Lisp Machines tried to be the best possible interpreter, but we now have advanced adaptive compilation technology. A very parallel Lisp could be really fast on a dedicated architecture (see GPUs and similar), but most people are interested in normal, sequential Lisp. In that case it might still have a niche if it could perform as well (or nearly as well) but with far fewer transistors (lower power and lower cost).

In short: a 2017 Lisp Machine should either go beyond the current Von Neumann model or it should do what you can do with Lisp on normal machines but cheaper and with less energy.


A reference from the late 80s: "Principles of Artificial Intelligence and Expert Systems Development" by David W. Rolston, 1 Jan 1988.

https://images-na.ssl-images-amazon.com/images/I/51rCBcebCBL...


The page hasn't been updated since 2008 and is full of broken links.




