I used a Xerox 1108 Lisp Machine from about 1982 through 1989 - a great developer experience, and not a bad platform to sell software products on.
That said, I encourage people who want to hack/learn Lisp to stick with one of the modern setups, like: Common Lisp (with Emacs or a complete free IDE setup like ClozureCL), Clojure (with Emacs or another IDE), Racket, etc.
I would much rather see people spend time learning a Lisp language rather than fiddling with very old environments.
While I agree with you in principle on not "fiddling with very old environments", I had to try out Genera after hearing from two separate people that Lisp Machines were the best computers they ever used. One guy was a VP at Yahoo. The other guy is a core contributor to the Java VM.
Both of these guys are hardcore Emacs users, running Mac OS X, and have in-depth knowledge of POSIX. It was hard for me to write them off as people who just didn't understand how modern computers work.
Fiddling with Genera has been a very worthwhile endeavor because of how much I've learned from it.
What have I learned? Well, so far I've learned that Genera is basically a case study showing that Richard Stallman's fears were actually well founded. I also learned a tremendous amount about an important but obscure part of the history of computing, a history that I think is actually a vision of what our future is.
So, yes, by all means, hack on and learn on one of the modern setups that Mark suggests above. Once you've done that, look me up and I'd be more than happy to give you a tour of Genera on my MacIvory.
> What have I learned? Well, so far I've learned that Genera is basically a case study showing that Richard Stallman's fears were actually well founded.
Sure! This is actually a good reminder that I should write something more in-depth on this topic, since most of what I think I know is based in large part on oral history with some conjecture.
So, here is where the history of Genera is non-existent or murky. Yes, you can download a torrent of Genera. But how do you obtain a legal license for Genera? Who actually owns the IP to Genera?
In learning the answers to those questions, I was left with even more respect for RMS and an amusing, if not ironic, anecdote showing how his vision for the future turned out to be correct.
> How do you obtain a legal license for Genera?
You purchase a copy of Open Genera for the DEC Alpha for $5000 from David Schmidt.
> Who actually owns the IP to Genera?
John Mallery (http://www.csail.mit.edu/user/926). He's the most recent owner. Before he got the IP, it was owned by a series of law firms and ex-Symbolics employees.
Why do I find this amusing? Well, the software that RMS worked so hard to protect and that ultimately helped "inspire" him to start GNU has been relegated to the footnotes of history. Meanwhile, GNU software is used on millions of machines.
> You purchase a copy of Open Genera for the DEC Alpha for $5000 from David Schmidt.
LOL. Great, anyone got a spare Alpha lying around?
> This Wikipedia entry does a good job at explaining the impact this part of history had on RMS
I'm looking at this line: "Unfortunately this openness would later lead to accusations of intellectual property theft."
That doesn't really capture the acrimony iirc. I was just a youngin' at the time, but I remember overhearing rms get a phone call; I believe it was from someone at the Symbolics legal team. They were trying to explain to him how he had violated something-or-another because he built some LISP feature from scratch.
They went round in circles for a while; finally rms tired of the conversation and ended it. It was a very bizarre conversation for an academic environment like the AI Lab.
Lisp Machines started at Xerox and MIT. The MIT project was started in the mid 70s. Much of the funding for Lisp at MIT came from DARPA (aka ARPA) in the context of enabling technology for modern software for the military. The MIT AI Lab projects in general were largely funded by DARPA. For some information about the later funding see this book: Strategic Computing, DARPA and the Quest for Machine Intelligence, 1983-1993 http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&...
When the research created the first usable prototypes of hardware (the Lisp Machines and other stuff) and software (expert systems, ...), DARPA wanted to commercialize it to create a market which could then serve their needs. So licenses to the Lisp Machine design were sold to LMI, Symbolics and later TI. DARPA also financed the users: many machines bought by university projects were funded that way. Many of the early Lisp Machines were sold to the SDI project (the Strategic Defense Initiative, a pet child of Ronald Reagan in the Cold War, the space-deployed missile defense system).
Stallman's role in that scenario is relatively tiny. He worked on software, and when some of the stuff he was using was about to be commercialized (in the above context), he protested against it. DARPA's mission was not to develop free Lisp software, but to develop battle management systems, logistics software, diagnosis software for complex military equipment, assistants/trainers for fighter pilots, missile guidance software, ...
Stallman fought for free software, but he was working in a government-funded lab, where the funders (DARPA) had a very different mission. The 'hacker spirit' at the lab was more of an accident, attracting creative people to develop the next generation of software and hardware for the military and other government agencies, with commercial spin-offs.
Stallman developed a lot of GNU software, but the goal of a new Lisp environment was given up early. For the initial goals see the GNU manifesto: http://www.gnu.org/gnu/manifesto.html
Funny side note: In the AI Lab, the names of the Symbolics machines started as dead rock stars. (Sinatra too, I think.) After they ran out of those, dead movie stars were used for names. RR was not too popular in those parts (he was president at the time), so he was one of the machine names too.
It was all fun and games until some D/ARPA reviewers walked through the machine room and put it together.
I tried to contact him to ask that exact question. He didn't answer my email.
I've thought of just calling him. But I'm intimidated by him, to be honest. He wrote the webserver that ran whitehouse.gov during the Clinton administration; I can barely program in Lisp.
I think "ignoring it due to being overworked or misaddressed" is the most likely -- and if it's either, try fedex/ups mailing a physical item (like a book or other gift, or food, or whatever -- something which doesn't fit in an envelope, and which is obviously nice enough that 1) dude feels guilt if he doesn't respond and 2) intermediaries will try to pass it along.
I've used this trick quite successfully ($20-50 items from appropriate gift vendors -- Cabelas for outdoor type people, gourmet food vendors for other people).
>>This is actually a good reminder that I should write something more in-depth on this topic
I would love to read it. Please post it to HN if you ever get to it, I think a lot of people (including me) are interested in the concepts and ideas behind Genera and other Lisp machines.
> I also learned a tremendous amount about an important but obscure part of the history of computing, a history that I think is actually a vision of what our future is.
Reading the rant from John Rose in the preface to THE UNIX-HATERS Handbook [1] was pretty eye-opening to me. The world that John is complaining about is the world that I've spent many years living in, quite happily too!
How is it that we lost the ability to fix the code to a program that crashed, then continue running the program from where it left off? Why aren't all of our tools self-documenting? Etc, etc.
I've spent quite a bit of time wondering why we've "lost" so much. I think that part of the problem is that technology was developed at a rate faster than most people could keep up with. I remember that early Macs had a game to teach people how to use the mouse! Also, before the internet, it was hard to transmit software and information about it.
So, the way I view the world of technology now is that we haven't "lost" anything, we're just catching up with the past. And doing a better job of it too!
In many ways, I think that the capabilities of "HTML5" (HTML/CSS/JavaScript) are converging on the capabilities of the X Window System. The main difference that I can see is that "HTML5" requires about 2-4 inches of book to understand, while the X Window System required a couple of feet of book to understand.
I'm not sure if my view of us just needing to catch up with the past is correct, but it's been pretty useful to me. Now, instead of bemoaning things that are lost, I look forward to seeing those things again, in a form that is easier to understand. More "pure" if you will.
I'm looking forward to the day when I can edit any part of my OS or applications, live, while they are running. I'm looking forward to being able to fix a program that has crashed and then have it continue running. I enjoy using interactive debuggers and I'm looking forward to seeing them in more languages. I love using REPLs to learn new languages and for doing quick prototyping.
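For readers who have never seen that workflow, here's a toy JavaScript sketch of the fix-and-continue idea (all names are invented for illustration). It fakes a Lisp-style restart with a retry loop, where the "fix" is swapping in a corrected definition instead of unwinding the whole program:

```javascript
// Toy stand-in for Lisp-style "fix and continue": instead of
// aborting the whole program on error, keep the failing step
// in a loop and let a (simulated) developer swap in a fix.
function runWithRetry(step, onError) {
  for (;;) {
    try {
      return step.fn();                 // attempt the current definition
    } catch (err) {
      step.fn = onError(err, step.fn);  // "edit the code", then continue
    }
  }
}

// A step whose first definition crashes.
const step = { fn: () => { throw new Error("not implemented"); } };

// Simulated interactive fix: on the first failure, replace the
// broken definition and resume from where we left off.
const result = runWithRetry(step, (err, _old) => () => 6 * 7);

console.log(result);  // 42
```

Real Lisp restarts are far more general (the debugger keeps the whole stack alive while you edit), but the sketch shows the basic shape: the error is a pause point, not the end of the program.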
That's why I'm watching JavaScript with interest. Many people are doing things with JavaScript that we were doing with Lisp Machines before. Some people are able to edit their server-side JavaScript live. Many of us know that our web browser has a built-in JavaScript REPL, some of us use that to fix webpages that other people wrote.
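That console trick is easy to demo. A minimal sketch (the function and its bug are made up) of fixing a live page from the JS REPL by rebinding a name:

```javascript
// Hypothetical page code shipped with a bug: it prints raw cents.
let formatPrice = (cents) => "$" + cents;

formatPrice(1999);  // "$1999" -- oops

// Typed into the browser's REPL: rebind the name with a fixed
// definition. Every subsequent caller picks up the fix, with no
// page reload and no lost state.
formatPrice = (cents) => "$" + (cents / 100).toFixed(2);

formatPrice(1999);  // "$19.99"
```

This only works because the page used a rebindable binding; it's a pale version of what Genera allowed, but it's the same editing-the-running-system spirit.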
In short. After spending a lot of time reading, talking about, and using Lisp Machines, I feel like I have a deeper understanding of what William Gibson meant when he said "The future is already here — it's just not very evenly distributed."
NeWS was architecturally similar to what is now called AJAX, except that NeWS:
- used PostScript code instead of JavaScript for programming.
- used PostScript graphics instead of DHTML/CSS for rendering.
- used PostScript data instead of XML/JSON for data representation.
* a doc browser, which integrates with said editor
* a high performance compiler, which integrates with the above two
* a debugger that can step through OS code as easily as program code
* the ability to stop a program mid-execution, change it, and continue from where you stopped
Python is a script interpreter running on UNIX. Nothing more, nothing less. It's terribly misguided to compare it to a LispM. You don't even know what you're missing.
The basic support for said functionality is all there in Python, minus the operating system (although I'm sure someone could write a Python OS for fun). You just need 3rd-party tools to make use of it.
>It was hard for me to write them off as people who just didn't understand how modern computers work.
I'm glad you didn't fall for that trap. Software engineers constantly make excuses for doing things in idiotic ways. Almost nothing I use really works anymore. It just "kind of" works most of the time. Look at your average web application and the ridiculous resources it takes to get the thing up on the screen and interacting with the user. We've become addicted to high-powered machines and to finding ever more complex and inefficient ways of doing the same things.
Absolutely agree. I was playing with a BBC Micro last night, 2MHz processor and 52k RAM (shadow RAM card fitted, and 16k sideways RAM - this is a beast of a machine!). Then I did some stuff on my 2x2.4GHz, 4GB RAM Mac, and I almost punched the damn thing in frustration, 2000x "faster" and it can't even keep up with my typing!
Well, maybe I'm not in that trap now, but I was for a long time.
> Almost nothing I use really works anymore.
I'm hoping that I can hide from that inside the Emacs monastery. That didn't seem to work for jwz though, so I'm not sure if there's a way to avoid having to update my silly software several times a decade.
JFC! I've been going through this at work today - the developers of a third-party app designed the app for low-utilization companies - our company is a bad fit for this software. We're in the top 500, and our usage patterns expose every scalability flaw in this software...
"I would much rather see people spend time learning a Lisp language rather than fiddling with very old environments."
As a general rule learning is better than repeating; however, there is value in those old environments. Primarily, for a long time software was getting more complex than hardware could support, and so a lot of adaptations were made in 'old environments' to get better performance out of under-performant hardware.
As we enter the 'post-PC' era and get a wider spread of machine capabilities in the marketplace, it is always useful to have a few 'tricks' in your pocket for getting better performance out of your system.
Knowing that you got those tricks from systems that are now > 20 years old provides a pretty good patent defense if you get trolled. Especially if you can show that you, being reasonably skilled in the art, learned to do what you did using exemplars that are greater than 20 yrs old. That is a strong case for prior art.
> rather than fiddling with very old environments.
I worked on Symbolics machines (rms, please forgive me). I even fixed a bunch of the wire-wrapped original LMs from before Symbolics.
I remember it being earth-shattering at the time (self-documenting? whoa) but I confess I'm curious how much was just the transition from TOPS-20 etc to a completely new single-user environment. All stuff from the dark ages really.
I'd probably fire it up out of nostalgia if nothing else.
The "Help" button the Symbolics machines blew me away. I've used software with great built-in help. It's impressive to see it system-wide.
Another thing that I still find really impressive is how (aside from the bootloader) all of Genera is written in Lisp and can be edited, live, while the system is running.
A lot of the things that I see people doing with JavaScript feel very familiar. It's exciting to see how the things that people are doing with JavaScript are approaching the capabilities of Genera, but in a way that will be much more accessible to the "kids these days".
I don't see the connection between Lisp machines and JavaScript. JavaScript (in the browser at least) is a heavyweight C++ system utilising huge amounts of library and operating system code, with some weak scripting capabilities at the bottom. What relation does that have to Lisp machines or Smalltalk-type environments, where everything is built in a simple and transparent manner?
You're right, it's a weak connection right now. I could be wrong, but I'm seeing JavaScript getting pushed "down" the stack. People are doing things like implementing the DOM in pure JavaScript. I'm also very interested in seeing what people are doing with Emscripten, ClojureScript, swank-js, and the like.
Yes, we don't have a Lisp or Smalltalk like environment for JavaScript. Not yet. Given how widely supported JavaScript is though, I could see us getting there organically? Not sure.
lispm, I've tried looking up the link you posted a year or so ago to a gallery of Genera software screenshots, but it doesn't seem to be working (lispm.dyndns.org). Have you moved the gallery somewhere else? Please let me know, I'm very interested in seeing how they look!
Given that Mozilla aren't willing to redesign their browser to eliminate the recurring flaws, I find it implausible that they're going to go for a balls-out JavaScript-all-the-way-down approach. The way it's going now is more C++ code to do specialised tasks with a bit of scripting on the front. Is stuff like Emscripten getting us closer to a half-way decent design? It seems like this is all just enabling more complexity. Now we'll have C code compiled to JavaScript compiled to machine code.
I like Emscripten and so forth for hack value, but this is a terrible way to build systems. I mean, people want sandboxed code in the browser, so why not make the whole browser sandboxed? Because nobody cares enough to put the effort in, I guess. Chrome is the only browser going along these lines, and Mozilla is doing its best to hose down that effort because it just might allow people to program for a simple portable VM instead of all this application-parading-as-a-platform web standards stuff.
I wasn't clear. I wasn't meaning to suggest that Chrome were going to move away from C++. But they are trying to sandbox using NaCl and other mechanisms. I was aware of the Rust effort, but not that they were going to implement more of the DOM in JavaScript (thanks!). Maybe there will be more security-by-design.
I also think JavaScript is a bad basis for a platform in any case, since it requires loads of complexity to be fast. Why not just a simple typed language or VM? I have read some JS JIT papers and they have found plenty of code generation bugs (more security holes).
I don't think Chrome is trying to sandbox their own code using NaCl, so much; they're just trying to sandbox "arbitrary executable" code.... which is sandboxed automatically if it's JS running inside a JS VM instead of a random binary blob.
For the rest, one of the main points of Rust and servo is to have better security-by-design. Whether the DOM ends up implemented in JS or in Rust is still up in the air at this point, but either one would be much better than C++ from a security perspective.
As for JavaScript, it's what we have due to happenstance, but displacing it involves either a huge amount more complexity in web browsers (to support JavaScript _and_ another language both touching the same objects and whatnot without memory leaks) or just dropping JS entirely and implementing some other language (not exactly likely to succeed). Maybe someone will create a VM that can run both JS and something else well. Maybe. It's not all that simple to do.
You and I are starting from different assumptions. I want to throw out the DOM. It is an application-specific component that should be implemented in "user land". Currently we're heading toward C inside JS inside Rust, with plugins written in C++ running outside any sandboxes. This is never going to be secure. You have to design a VM that everything can sit in. There's no reason why a browser can't sit inside a simple VM, except that it builds in huge amounts of complexity regardless of how it's used.
We don't need a VM to run JS "well". Nothing going on in the browser is even CPU intensive if not for the huge gobs of complexity going on. People are making simple things harder and harder to do, and complex things easier and easier. If you just write everything for a simple typed VM you don't have to bend over backward to make things run fast. There's nothing going on in the client side of say, GMail, that I couldn't do (faster!) on the computer I was using in 1997.
>That's not quite true. People do in fact do CPU intensive stuff in browsers, if nothing else because they write algorithmically slow code.
It's CPU intensive because of the way it's done, not because it intrinsically requires much CPU. That was my point. And it's not just algorithms; everything goes through a million layers of abstraction.
>Note that GMail is not an example of an application that really does intensive JS. A photo editing app would be a better example.
Again, I was doing photo-editing years ago with no troubles. You wouldn't even be able to start your JS photo editing app on a 1996 computer. And by focusing on "CPU-intensive" tasks you're missing an essential point, which is that stuff that shouldn't require any CPU does. My system shouldn't pause - ever. We have optimised everything for high throughput on powerful machines. The JVM suffers from exactly this problem. Java is plenty "fast" if you ignore latency.
There's just nothing demanding enough to require that going on. And yet the UI locks up for 1-2 seconds pretty frequently on my $1000 desktop machine with loads of RAM and CPU. It's even worse on my $400 laptop. There's nothing intrinsic in the hardware or the tasks that should cause this to happen - it's the design of the software. It's because the system is too complex and preemptable that this happens.
Oops. In the above I said the system shouldn't pause ever. This is silly of course. What I meant to say is that it shouldn't pause except when hit with hard resource limits. Loading from the disk or performing an expensive calculation will take some time of course. But there shouldn't be random pauses when interacting with something that ought to be fit into memory. Modern systems are full of pre-emption, GC pauses and other non-determinisms.
I consider my time with Genera to be more akin to archeology and code reading. I look through it for ideas and for examples. How did a group of really smart programmers build an environment that they really liked using? What ideas and techniques did they use? Those are the questions I am looking for answers to. When I actually do any Lisp programming for its own sake I use RMCL or something like that.