It was by far the most productive environment I ever used. I was the architect of the lower levels of Cyc (Guha the higher levels), and in a short time I was able to write enormous amounts of code that was used for years. There aren’t really other exploratory development environments like that any more. I actually switched the team from D machines to Symbolics machines (I had previously worked on D machines at PARC, so it wasn’t a familiarity thing).
As for speed, cost and reliability: I had two such machines (two computer tables in my office), one with a color screen, each with a massive 8 MW of RAM (40 MB). And MCC had an on-site Symbolics tech who was conveniently across the hall from me. Not really a scalable approach, but this relates to Dan Luu’s post that is currently also on the front page. It was absurdly slow by today’s standards, but not by the standards of the era.
Now I do all my development on a MacBook Air. And as with the lispm, I mostly use it in full screen mode.
It still boggles the mind to see how the universe dilutes things people once considered best in class. Age made me a little more rounded and stoic (economics, management, culture, paradigm... lots of factors at play). But it’s still weird that value evaporates.
Cyc is legendary. I wouldn't be surprised to see some recurring HN posts on it. Maybe especially for the benefit of people first arriving in the era of DNNs, but also the rest of us.
I’d be interested in modern perspectives on Cyc and Eurisko. Hopefully I’ll be forgiven for this impression, but I’ve only heard of Cyc being legendary for basically wasting Lenat’s career after the promising results from Eurisko. It’d be an interesting debate, whether recent advancements like GPT-3 confirmed or refuted the intuition that architecture isn’t enough, we need to encode knowledge. Feels like we have settled on the idea that a good enough architecture (and enough compute time) can take care of the latter. But it’d still be valid to claim GPT-3 lacks common sense.
I still think Eurisko is fascinating. Stochastic, heuristic systems seemed to reach their highest point with random forests and subsequent gradient boosting approaches. But we’re still obsessed with quite static models over living, learning agents.
Note that Doug later wrote a paper “Why AM and Eurisko appear to work” (emphasis mine).
People are still using semantic KBs like OpenCyc for research.
> …confirmed or refuted the intuition that architecture isn’t enough, [that] we need to encode knowledge
For my position on this: I left the project for a number of reasons, but one fundamental one was that I perceived the corpus to be largely pointless (note the countervailing position above: people still get value from them today).
Architecture alone is clearly inadequate as can be seen from the fragility / inadequacy of current systems, yes, including GPT-n. They remind me of things like the X-15: providing valuable data but ultimately not on the path to space flight.
Knowledge acquisition is ad hoc, and more importantly the graph connections are highly ad hoc, dependent on architecture and prior state. I no longer believe (perhaps I never did) that you can meaningfully have one and not the other.
After leaving Cyc I went in a different direction (e.g. Cygnus) and only returned to symbolic AI a few years ago, working from a constructivist model. Unfortunately I had to step away from that work last year due to a more pressing problem. But I think it’s ultimately a more powerful approach.
I'm wondering if there have been any efforts to create a FOSS clone of the Genera operating system for commodity architectures. I was born during the beginning of the 1980s AI winter, and while I've heard a lot of amazing things about Symbolics from this website and other places (especially the development environment), I've never used a Symbolics Lisp machine. Unfortunately I lack the money and the space to own one, and sadly Genera is still proprietary. A Genera clone would make this technology much more accessible to those who either can't afford Symbolics Lisp machines or don't want to pirate Open Genera.
Thanks for sharing this detailed post. Looking forward to the next visionary project in creating a new environment for developers, if there will ever be another one pushing the state of the art further. I guess we could now build workstations with 2 TB RAM and 10 TB SSD, but what is the wish list on the software side beyond getting back to past sophistication?
Has anyone written a side-by-side comparison of modern environments (e.g. IntelliJ) versus Symbolics or other LISP machines as a research paper?
Have you used it extensively by chance? I was recently exploring Pharo with "Pharo by Example", but found it really hard to approach because the UI seems to change every major version (with things like the System Browser changing names). Squeak on the other hand was a lot smoother to get started with. I'm also not really sure how to best compare Squeak and Pharo.
Just scratched the surface, no extensive experience with it. Python is what I use at work, but I miss the Pharo environment a lot. It was forked from Squeak, and indeed still evolves at a fast pace. I think one of the reasons for the fork + name change was to give room for breaking changes from the Smalltalk roots. But it's still mostly compatible.
Other than the obvious (not easily being able to see / change the code of the OS, FS, GUI, networking stack, and the editor using its own, pre-CL Lisp dialect), what's missing from a Linux/macOS + Emacs + SLIME + SBCL/CMUCL/CCL environment? Does it really make a difference to an application developer? How do the offerings of Allegro or LispWorks compare?
Well, if you made one, it would be much better than the original. We had one refrigerator-sized Symbolics machine at the aerospace company, during the false AI boom of the 1980s. I used it a few times, but it had one LISP fan who used it a lot for a specific project. It broke down a lot, and Symbolics service was very poor. The early versions had a really slow garbage collector. Minutes. With everything in one address space, you had to wait out the GC; you couldn't kill your program and start over. Many of those special-purpose buttons on the Space Cadet-derived keyboard didn't actually do anything. Unless, of course, someone rebound them in EMACS.
I did LISP work then, but mostly using Franz LISP on VAXen and Sun Workstations.
I was never really into the LISP cult, though I wrote a lot of LISP. Franz LISP could compile to .o files, but you couldn't just link them and make an executable. You had to load them back into the LISP environment. I asked the Franz people why they didn't package up the run time so that you could just link the thing and make an executable without debug break and interpreter capability. They were puzzled at the question. Delivering a finished product was totally alien to them.
Eventually, hard-compiled LISP on microprocessors became faster than custom LISP hardware, and LISP machines went away. And, eventually, other languages got enough dynamism that LISP was no longer needed.
It's one of those things which was better in retrospect.
I worked for about four years on Symbolics machines in the AI summer of the mid 80's. We were, perhaps, lucky in that I don't remember them breaking down at all, and our service guy was competent and entertaining.
I remember when the generational GC was released. Before that, we just used to reboot at the end of the day... but the generational GC just worked, and we didn't have to worry about memory (so much). Our rep installed the GC thermometer (that showed in the overscan area so you didn't lose any screen real estate), and that really showed the difference.
And, by the way, it was Zmacs, not Emacs on the Symbolics.
Refrigerator size suggests to me one of the very early models, which in fact could have been much more problematic than any of the later ones. I seem to recall being warned, in the case of a "free to good home" 3600, about how delicate it was.
Yes, the Symbolics 3600. In the later models, Symbolics apparently got their hardware act together, but I never used any of those. There was a period in early workstations where we had one or two of everything, and finally settled on Sun workstations. The early 1980s were a time of great hardware creativity, and many dead ends.
I had a 3630 which was a somewhat wide deskside tower, no louder than a deskside Sun 3. I never had any service problems other than a somewhat dubious tape drive.
3630 was second generation design, a so-called "G-machine" (vs. "L-machine" of previous models), and I believe it already shipped with Ephemeral GC from start.
I was advised that, if possible, grabbing the boards necessary to upgrade a 3600 to a 3670 would make for a much easier life ;)
From a historical interest perspective, I would love to see some Symbolics simulators.
From a productivity and ergonomics perspective, I don't think chip-level simulation would be the way to go. First, Genera has been ported forward to new systems: OpenGenera was released for Tru64 on the DEC Alpha platform as an official product. The fastest Symbolics hardware was about 6 times faster than the original Symbolics hardware; OpenGenera on a 533 MHz Alpha was about 3 times faster still. There is a project called VLM that lets you run OpenGenera on x86 or ARM, and I would imagine it is many times faster still. Also, to my understanding, OpenGenera can still be purchased from symbolics-dks.com for a hefty fee.
Second, I think that productivity will be limited by how much it is stuck in the past. If nothing else, crypto being behind the times seems likely to be an issue when networking with the rest of the world.
I would love to see a more modern system like OpenGenera. I would also love to see Symbolics-style keyboards being made, possibly upgraded with things like better keyswitches and USB. That should be doable for someone who wants to make a custom layout, a circuit board for that layout, and keycaps, based on QMK-compatible hardware.
This is also an emulator. Interlisp uses an emulator written in C and output goes to X11.
Open Genera and Portable Genera use an emulator written in assembler (running on DEC Alpha, X86-64 and ARM64 - for the CPU emulation) and output goes to X11.
As far as I understood it, at first people used the Tru64-emulated Open Genera under Alpha emulation, running on some older Ubuntu on x86-64. That was years ago; maybe that has seen some refinements since. Is Portable Genera directly retargeted to x86-64 now? And does it matter, if it's practically unavailable?
edit: Of course I know Interlisp-D/Medley is also an emulation. But it's directly retargeted, and legally open source. And not 'production ready' for now, but that seems to be only a matter of time.
Depends on who that 'us' is. If 'us' is a typical Lisp user, then I doubt that any of the older systems is actually 'interesting' beyond software archeology. I think this 'us' is better off using something like SBCL (+ whatever) instead of the old systems, available or not. Or Racket, Clojure, or whatever new language has enough mindshare to create an ecosystem.
While that is a pragmatic point of view I can perfectly understand, it doesn't satisfy the urge to bask in the nimbus of the 'awesome integration', and thereby the gained power & speed of dabbling in whatever one's whims compel within such environments.
Without having to break the bank, or being otherwise artificially constrained.
Just a warning to innocent readers here: the stuff is really old and it shows. The software was developed, designed and grown in another time. Understanding the 'awesome integration' or the 'power & speed' isn't easy. One can look at that stuff for a long time and still have no idea why these features are there, how to use them in actual programming, and how they were hacked into the big pile of mud one has just downloaded from the Internet. The Lisp software was designed for customers with deep pockets to get development & maintenance done. Once the money went away, this stuff died, some of it early and some slightly later. It's like finding another old Egyptian pyramid and thinking 'let's clean it and use it again'. Unfortunately the builders and original users are mostly no longer there.
LMI Lispm software: old, license unclear. TI Lispm software: leaked, incomplete, no license.
Symbolics Lispm software: partly leaked, no license, owner exists, commercially available. Has a Common Lisp implementation. Best Lispm software. Most development stopped mid 90s. Some updates in the last years, due to a new emulator. Emulator was a commercial product.
Interlisp/Medley: open sourced after a long time of being unavailable. There is a Common Lisp version which very few people ever used. The emulator was a commercial product. The software is largely written in Interlisp, a Lisp dialect which has been mostly dead for 35 years, with almost no users, no libraries, ...
All of these have some very cool technology, but much of it is between 30 and 50 years old.
It's not the Lisp for the 'rest of us'; it's for hobby software archeologists, for the few people who have the capabilities to update them, and for the <100 actual users/developers using them for work.
Thank you for the explanation. But initially I posted that in answer to https://news.ycombinator.com/item?id=29238005 where someone asked whether it would make sense to clone/reimplement something like Genera.
With the intent of not having to start from scratch, not necessarily using it as-is, but taking inspiration from it.
Or not, and really pushing it forward instead, but that depends on so many factors. However, it is possible.
Besides that, did you really have the impression 'innocent people' had to be warned of the consequences, wasted time, whatever? ;-)
Furthermore, either Common Lisp is Common Lisp, or not. So what does it matter, if only few people ever used that?
As soon as it gets graphical, there is nothing besides the two commercial vendors (so proprietary) and old McCLIM.
So who really cares? There are Movitz and Mezzano; why should there be no effort to make Medley modern?
Would McCLIM on steroids be better for SBCL and Clozure?
Who knows?
I, for my part, like to look deep into such things, to see how I can adapt the concepts (not necessarily the code) into my dabblings on FPGAs.
Sometimes I had my mind blown by things I hadn't seen before and didn't even know the possibility thereof. Learning new concepts, applying them.
Anyway, I like to go back to the roads not taken, to at least partially re-branch from there. There is much to learn from them. From what we have now, not so much.
> Furthermore, either Common Lisp is Common Lisp, or not.
Not really. Currently the most widely used Common Lisp implementation is SBCL. There is software out there, and the probability of being able to run it on a Lisp implementation depends on its compatibility (and the effort that has been taken to make it compatible). Medley is very different from most other Lisps, even from other Lisp Machines, so it would be interesting to see how to get software in and out of it, make it compatible, etc. Actual software depends on more than just 'Common Lisp' (which itself has a lot of differences between implementations): file system, I/O system, characters, networking, threading, FFI, TCO, ...
Lisp Machines have a very different operating system and very different development environments. Lots of basic assumptions don't apply there.
> why should there be no effort to make Medley modern
There is already some. We'll see what it brings.
> Anyway, I like to go back to the roads not taken, to at least partially re-branch from there. There is much to learn from them. From what we have now, not so much.
Even though the past stuff interest me (and I remember a time, when there were active Lisp Machine users around), I learn a lot from stuff we have now, too.
> First, OpenGenera was released for Tru64 on the DEC Alpha platform as an official product. The fastest Symbolics hardware was about 6 times faster than the original Symbolics hardware. OpenGenera on a 533 MHz Alpha was about 3 times faster still.
Open Genera was soft emulation of the Symbolics Ivory hardware on Alpha. I believe they didn't even do anything fancy like dynarec, which is why it was such a doddle to port it (from binaries?!) to amd64.
> Second, I think that productivity will be limited by how much it is stuck in the past. If nothing else, crypto being behind the times seems likely to be an issue with networking with the rest of the world.
A modern developer running Visual Studio Code has most of the features the Lisp Machines had, and then some. Edit and continue can even be had under Java and C#.
The Symbolics software is still proprietary, I think. The MIT CADR software is available and can run under emulation, but last I heard it was not all the way there. I'm sure some people here know more about it than I do.
The MIT CADR software can also run on an FPGA. The main problem with it is that we only have a saved image from a pre-Common Lisp version. There are sources and compiled files for a later version that does implement Common Lisp as well, but so far we have not worked out a way to get an updated image.
There is an emulator for the LMI Lambda hardware that can run an improved version of the final LMI software.
I once spoke to David, asking if they would open source the UI for posterity, and he politely replied in the negative. There are a few MacIvory installations left floating around, so they might have a support contract stream (I'm guessing), although I'm not sure what other possible income they hope will be coming.
The preservation part is becoming most acute, as there will soon be zero hardware and zero people who can get this into digital museum status.
I think that the best way to get a similar experience would be to grab SBCL and write a high-level virtual machine on top of it that serves as a common GUI, application framework, editor, database, and provides other OS-related functions to make the developer experience 100% within Lisp. No Emacs or anything else from the outside, everything from audio over the graphics and the monitor sizes to networking needs to be abstracted away from the real machine. For example, every GUI and graphics feature needs to be freely scalable and resolution-invariant, based on virtual screens with their own coordinate systems. Aspect ratio is a problem, though, it probably needs to be taken from the real machine.
Anyone who wants to develop such a thing would have to take care to also abstract away from common mechanisms that are not very useful. For example, there seems to be absolutely no need to speak about files in such a system except for legacy support with the outside world. Likewise, standard network access should be on a self-configuring virtual p2p network whereas normal networking is a legacy system to the outside world.
Well, that's just my idea of a good Lisp machine. I don't mind if it runs on Linux as long as it doesn't feel like it. On the contrary, having such a large ecosystem underneath would allow the VM developers to integrate existing technologies freely into the VM. All it takes is to write a Lisp abstraction layer that deliberately prohibits any direct access to the underlying non-Lispy implementation. (If you grant too much access to the outside from the VM, then you destroy the VM's purpose - guaranteed interoperability, full reflection, and interactivity.)
However, I don't see such a truly interactive environment happening, because it's a lot of work.
Yes, I know these but my suggestion was pretty much the opposite, to use Linux as a basis and abstract away from it entirely. The result is better and much more usable. That's just my opinion, of course, and ymmv.
McCLIM runs on Linux, and it has tools like a debugger, editor, listener and others written on top of it. Applications are possible. The UI is similar to what a Symbolics had. It could serve as the UI part of a larger solution like you mentioned, where some services are provided by Linux or by tools on top of Unix (database, etc.).
This is specifically a site about the Symbolics machines, so that's not a huge surprise, but neither PARC nor MIT was at all insular back then, and you can see the influence of the interactive MACLISP environment on the Smalltalk (and Interlisp-D) work, and of the Smalltalk/Interlisp-D approach on the CADR.