Lisp Machines (patrickcollison.com)
141 points by apgwoz on March 12, 2012 | 80 comments



Kent Pitman wrote a pretty good posting in a comp.lang.lisp thread, in which he described some of the tight integration that made Genera so good: http://groups.google.com/group/comp.lang.lisp/browse_thread/... .

This was one of the things that convinced me to buy one (I have a MacIvory Model II). I've found it a bit difficult to get into, mainly because I don't have the time to spend immersed in it long enough to make the things I've learned stick. I also suspect that a number of the real benefits come from things that aren't immediately apparent; you really need to work with people who know the environment to show you these tricks. I know that's the way I learned Smalltalk - if I'd learned it from books on my own, I probably wouldn't have realized all the advantages to be gotten from such an interactive environment.

If anyone can suggest any features like the two that Pitman describes above that I (or anyone else) could investigate, that would be great.


I used a Xerox 1108 Lisp Machine from about 1982 through 1989 - a great developer experience, and not a bad platform to sell software products on.

That said, I encourage people who want to hack/learn Lisp to stick with one of the modern setups: Common Lisp (with Emacs, or a complete free IDE setup like ClozureCL), Clojure (with Emacs or another IDE), Racket, etc.

I would much rather see people spend time learning a Lisp language rather than fiddling with very old environments.


While I agree with you in principle on not "fiddling with very old environments", I had to try out Genera after hearing from two separate people that Lisp Machines were the best computers they ever used. One guy was a VP at Yahoo. The other guy is a core contributor to the Java VM.

Both of these guys are hardcore Emacs users, running Mac OS X, and have in-depth knowledge of POSIX. It was hard for me to write them off as people who just didn't understand how modern computers work.

Fiddling with Genera has been a very worthwhile endeavor because of how much I've learned from it.

What have I learned? Well, so far I've learned that Genera is basically a case study showing that Richard Stallman's fears were actually well founded. I also learned a tremendous amount about an important but obscure part of the history of computing, a history that I think is actually a vision of what our future is.

So, yes, by all means, hack on and learn on one of the modern setups that Mark suggests above. Once you've done that, look me up and I'd be more than happy to give you a tour of Genera on my MacIvory.


> What have I learned? Well, so far I've learned that Genera is basically a case study showing that Richard Stallman's fears were actually well founded.

Could you expand on that?


Sure! This is actually a good reminder that I should write something more in-depth on this topic, since most of what I think I know is based in large part on oral history, with some conjecture.

Lisp Machines started at MIT, some of that code is actually available online now (http://www.heeltoe.com/retro/mit/mit_cadr_lmss.html). That software became the basis of two companies: Symbolics and then later, Lisp Machines Inc (LMI). This Wikipedia entry does a good job at explaining the impact this part of history had on RMS: http://en.wikipedia.org/wiki/Lisp_Machines#Folklore_about_LM...

So, here is where the history of Genera is non-existent or murky. Yes, you can download a torrent of Genera. But how do you obtain a legal license to Genera? Who actually owns the IP to Genera?

In learning the answers to those questions, I was left with even more respect for RMS and an amusing, if not ironic, anecdote showing how his vision for the future turned out to be correct.

> How do you obtain a legal license to Genera?

You purchase a copy of Open Genera for the DEC Alpha for $5000 from David Schmidt.

> Who actually owns the IP to Genera?

John Mallery (http://www.csail.mit.edu/user/926). He's the most recent owner. Before he got the IP, it was owned by a series of law firms and ex-Symbolics employees.

Why do I find this amusing? Well, the software that RMS worked so hard to protect and that ultimately helped "inspire" him to start GNU has been relegated to the footnotes of history. Meanwhile, GNU software is used on millions of machines.


> You purchase a copy of Open Genera for the DEC Alpha for $5000 from David Schmidt.

LOL. Great, anyone got a spare Alpha lying around?

> This Wikipedia entry does a good job at explaining the impact this part of history had on RMS

I'm looking at this line: "Unfortunately this openness would later lead to accusations of intellectual property theft."

That doesn't really capture the acrimony iirc. I was just a youngin' at the time, but I remember overhearing rms get a phone call; I believe it was from someone at the Symbolics legal team. They were trying to explain to him how he had violated something-or-another because he built some LISP feature from scratch.

They went round in circles for a while; finally rms tired of the conversation and ended it. It was a very bizarre conversation for an academic environment like the AI Lab.


>LOL. Great, anyone got a spare Alpha lying around?

IIRC, there's a Linux version that owners of Open Genera for the DEC Alpha are allowed to download.


There are Alpha emulators.


So there are. Thanks to you and stray for pointing this out.


Lisp Machines started at Xerox and MIT. The MIT project was started in the mid 70s. Much of the funding for Lisp at MIT came from DARPA (aka ARPA) in the context of enabling technology for modern software for the military. The MIT AI Lab projects in general were largely funded by DARPA. For some information about the later funding see this book: Strategic Computing, DARPA and the Quest for Machine Intelligence, 1983-1993 http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&...

When the research created the first usable prototypes of hardware (the Lisp Machines and other stuff) and software (expert systems, ...), DARPA wanted to commercialize it to create a market which could then serve their needs. So licenses of the Lisp Machine design were sold to LMI, Symbolics, and later TI. DARPA also financed the users: many machines bought by university projects were funded that way. Many of the early Lisp Machines were sold to the SDI project (the Strategic Defense Initiative, a pet project of Ronald Reagan in the Cold War, the space-deployed missile defense system).

Stallman's role in that scenario is relatively tiny. He worked on software, and when some of the stuff he was using was about to be commercialized (in the above context), he protested against it. DARPA's mission was not to develop free Lisp software, but to develop battle management systems, logistics software, diagnosis software for complex military equipment, assistants/trainers for fighter pilots, missile guidance software, ...

Stallman fought for free software, but he was working in a government-funded lab, where the funders (DARPA) had a very different mission. The 'hacker spirit' at the lab was more of an accident, attracting creative people to develop the next generation of software and hardware - for the military and other government agencies, with commercial spin-offs.

As mentioned the SDI initiative was using this technology. But there were several others. One of the biggest wins was DART, http://en.wikipedia.org/wiki/Dynamic_Analysis_and_Replanning...

Stallman developed a lot of GNU software, but the goal of a new Lisp environment was given up early. For the initial goals see the GNU manifesto: http://www.gnu.org/gnu/manifesto.html


> a pet child of Ronald Reagan in the cold war

Funny side note: in the AI Lab, the Symbolics machines started out being named after dead rock stars. (Sinatra too, I think.) After they ran out of those, dead movie stars were used. RR was not too popular in those parts (he was president at the time), so he became one of the machine names too.

It was all fun and games until some D/ARPA reviewers walked through the machine room and put it together.


I wonder how much of a community bounty it would take for John Mallery to open source it.


I tried to contact him to ask that exact question. He didn't answer my email.

I've thought of just calling him. But I'm intimidated by him, to be honest. He wrote the webserver that ran whitehouse.gov during the Clinton administration; I can barely program in Lisp.


Maybe getting an AI lab person to do it (early Akamaite or maybe a MIT faculty member?) would be helpful.


What is the worst thing that can happen? I know they are called Lisp ninjas, but they are 40-50-60 year old guys with neck beards...


I think "ignoring it due to being overworked or misaddressed" is the most likely -- and if it's either, try fedex/ups mailing a physical item (like a book or other gift, or food, or whatever -- something which doesn't fit in an envelope, and which is obviously nice enough that 1) dude feels guilt if he doesn't respond and 2) intermediaries will try to pass it along.

I've used this trick quite successfully ($20-50 items from appropriate gift vendors -- Cabelas for outdoor type people, gourmet food vendors for other people).

It's like the next-level of conference schwag.


You're probably right. I'll bet I'm overthinking this. I like the idea of sending him something nice. I'll do that.


Are you sure it's John Mallery that's the current owner now? He's not that hard to get a hold of.


No, I'm not sure. That's what David Schmidt told me a year or so ago when I bought my MacIvory from him.

It seems very plausible though, given John's background.


Also, someone should make an AMI of this. Lisp machine by the hour :)


In that case history really does repeat, rather than only rhyme.


>>This is actually a good reminder that I should write something more in-depth on this topic

I would love to read it. Please post it to HN if you ever get to it, I think a lot of people (including me) are interested in the concepts and ideas behind Genera and other Lisp machines.


> I also learned a tremendous amount about an important but obscure part of the history of computing, a history that I think is actually a vision of what our future is.

Could you expand on that as well?


Reading the rant from John Rose in the preface to THE UNIX-HATERS Handbook [1] was pretty eye-opening to me. The world that John is complaining about is the world that I've spent many years living in, quite happily too!

How is it that we lost the ability to fix the code to a program that crashed, then continue running the program from where it left off? Why aren't all of our tools self-documenting? Etc, etc.

I've spent quite a bit of time wondering why we've "lost" so much. I think that part of the problem is that technology was developed at a rate faster than most people could keep up with. I remember that early Macs had a game to teach people how to use the mouse! Also, before the internet, it was hard to transmit software and information about it.

So, the way I view the world of technology now is that we haven't "lost" anything, we're just catching up with the past. And doing a better job of it too!

In many ways, I think that the capabilities of "HTML5" (HTML/CSS/JavaScript) are converging on the capabilities of the X Window System. The main difference that I can see is that "HTML5" requires about 2-4 inches of book to understand while the X Window System required a couple of feet of book to understand.

I'm not sure if my view of us just needing to catch up with the past is correct, but it's been pretty useful to me. Now, instead of bemoaning things that are lost, I look forward to seeing those things again, in a form that is easier to understand. More "pure" if you will.

I'm looking forward to the day when I can edit any part of my OS or applications, live, while they are running. I'm looking forward to being able to fix a program that has crashed and then have it continue running. I enjoy using interactive debuggers and I'm looking forward to seeing them in more languages. I love using REPLs to learn new languages and for doing quick prototyping.
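
To make the "fix it and continue" part concrete, here's a minimal sketch you can try in any plain Common Lisp; it's not Genera-specific, and PARSE-CONFIG-LINE is just a made-up name for illustration. The condition system lets you land in the debugger at the point of the error, supply a fix, and resume rather than unwind:

    ;; A minimal sketch, in plain Common Lisp, of fix-and-continue.
    ;; PARSE-CONFIG-LINE is a hypothetical example, not a Genera API.
    (defun parse-config-line (line)
      (restart-case (parse-integer line)
        (use-value (v)
          :report "Supply an integer to use instead."
          :interactive (lambda () (format t "Value to use: ") (list (read)))
          v)))

Calling (parse-config-line "oops") drops you into the debugger; choosing the USE-VALUE restart and typing 42 continues the program from the point of the error. Genera (like Smalltalk) generalized this so you could also edit and recompile the offending function before retrying.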

That's why I'm watching JavaScript with interest. Many people are doing things with JavaScript that we were doing with Lisp Machines before. Some people are able to edit their server-side JavaScript live. Many of us know that our web browser has a built-in JavaScript REPL, some of us use that to fix webpages that other people wrote.

In short. After spending a lot of time reading, talking about, and using Lisp Machines, I feel like I have a deeper understanding of what William Gibson meant when he said "The future is already here — it's just not very evenly distributed."

Footnote: 1: http://www.cs.washington.edu/homes/weise/preface.html


HTML5 is more analogous to NeWS because it executes on the client http://en.wikipedia.org/wiki/NeWS

    NeWS was architecturally similar to what is now called AJAX, except that NeWS:
    - used PostScript code instead of JavaScript for programming.
    - used PostScript graphics instead of DHTML/CSS for rendering.
    - used PostScript data instead of XML/JSON for data representation.


I suggest trying your hand at Python. It's pretty much all there.


Python does not include:

* a self-documenting editor

* a doc browser, which integrates with said editor

* a high performance compiler, which integrates with the above two

* a debugger that can step through OS code as easily as program code

* the ability to stop a program mid-execution, change it, and continue from where you stopped.

Python is a script interpreter running on UNIX. Nothing more, nothing less. It's terribly misguided to compare it to a LispM. You don't even know what you're missing.


The basic support for said functionality is all there in Python, minus the operating system (although I'm sure someone could write a Python OS for fun). You just need 3rd-party tools to make use of it.


>It was hard for me to write them off as people who just didn't understand how modern computers work.

I'm glad you didn't fall for that trap. Software engineers constantly make excuses for doing things in idiotic ways. Almost nothing I use really works anymore. It just "kind of" works most of the time. Look at your average web application and the ridiculous resources it takes to get the thing up on the screen and interacting with the user. We've become addicted to high powered machines and finding more complex and inefficient ways of doing the same things.


Absolutely agree. I was playing with a BBC Micro last night: 2MHz processor and 52k RAM (shadow RAM card fitted, and 16k sideways RAM - this is a beast of a machine!). Then I did some stuff on my 2x2.4GHz, 4GB RAM Mac, and I almost punched the damn thing in frustration; 2000x "faster" and it can't even keep up with my typing!

We need to take it back to the old skool.


> I'm glad you didn't fall for that trap.

Well, maybe I'm not in that trap now, but I was for a long time.

> Almost nothing I use really works anymore.

I'm hoping that I can hide from that inside the Emacs monastery. That didn't seem to work for jwz though, so I'm not sure if there's a way to avoid having to update my silly software several times a decade.


JFC! I've been going through this at work today - the developers of a third-party app designed the app for low-utilization companies - our company is a bad fit for this software. We're in the top 500, and our usage patterns expose every scalability flaw in this software...


"I would much rather see people spend time learning a Lisp language rather than fiddling with very old environments."

As a general rule learning is better than repeating; however, there is value in those old environments. For a long time, software was getting more complex than hardware could support, so a lot of adaptations were made in 'old environments' to get better performance out of under-performant hardware.

As we enter the 'post PC' era and get a wider spread of machine capabilities in the market place, it is always useful to have a few 'tricks' in your pocket for getting better performance out of your system.

Knowing that you got those tricks from systems that are now > 20 years old provides a pretty good patent defense if you get trolled. Especially if you can show that you, being reasonably skilled in the art, learned to do what you did using exemplars that are more than 20 years old. That is a strong case for prior art.


> rather than fiddling with very old environments.

I worked on Symbolics machines (rms, please forgive me). I even fixed a bunch of the wire-wrapped original LMs from before Symbolics.

I remember it being earth-shattering at the time (self-documenting? whoa) but I confess I'm curious how much was just the transition from TOPS-20 etc to a completely new single-user environment. All stuff from the dark ages really.

I'd probably fire it up out of nostalgia if nothing else.

BTW: Dan Weinreb's post ("Why did symbolics fail?" linked in the OP) has migrated to here: http://danweinreb.org/blog/why-did-symbolics-fail


I'm 29. I first used Genera a few years ago.

The "Help" button the Symbolics machines blew me away. I've used software with great built-in help. It's impressive to see it system-wide.

Another thing that I still find really impressive is how (aside from the bootloader) all of Genera is written in Lisp and can be edited, live, while the system is running.
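
For a small taste of what that feels like, here's a sketch you can try in any Common Lisp REPL (AREA is just an invented example); Genera extends the same late-binding behavior to the whole operating system:

    ;; Live redefinition: callers pick up the new definition immediately.
    (defun area (r) (* 3 r r))      ; quick-and-dirty first version
    (area 2)                        ; => 12
    (defun area (r) (* pi r r))     ; redefine while the image keeps running
    (area 2)                        ; => 12.566..., no restart, no relink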

A lot of the things that I see people doing with JavaScript feel very familiar. It's exciting to see how the things that people are doing with JavaScript are approaching the capabilities of Genera, but in a way that will be much more accessible to the "kids these days".


I don't see the connection between Lisp machines and JavaScript. JavaScript (in the browser at least) is a heavy-weight C++ system utilising huge amounts of library and operating system code with some weak scripting capabilities at the bottom. What relation does that have to Lisp machines or Smalltalk-type environments where everything is built in a simple and transparent manner?


You're right, it's a weak connection right now. I could be wrong, but I'm seeing JavaScript getting pushed "down" the stack. People are doing things like implementing the DOM in pure JavaScript. I'm also very interested in seeing what people are doing with Emscripten, ClojureScript, swank-js, and the like.

Yes, we don't have a Lisp or Smalltalk like environment for JavaScript. Not yet. Given how widely supported JavaScript is though, I could see us getting there organically? Not sure.



lispm, I've tried looking up the link you posted a year or so ago to a gallery of Genera software screenshots, but it doesn't seem to be working (lispm.dyndns.org). Have you moved the gallery somewhere else? Please let me know, I'm very interested in seeing how they look!


Given that Mozilla aren't willing to redesign their browser to eliminate the recurring flaws, I find it implausible that they're going to go for a balls-out JavaScript-all-the-way-down approach. The way it's going now is more C++ code to do specialised tasks with a bit of scripting on the front. Is stuff like Emscripten getting us closer to a half-way decent design? It seems like this is all just enabling more complexity. Now we'll have C code compiled to JavaScript compiled to machine code.

I like Emscripten and so forth for hack value, but this is a terrible way to build systems. I mean, people want sandboxed code in the browser, so why not make the whole browser sandboxed? Because nobody cares enough to put the effort in, I guess. Chrome is the only browser going along these lines and Mozilla is doing its best to hose down that effort because it just might allow people to program for a simple portable VM instead of all this application-parading-as-a-platform web standards stuff.


> Given that Mozilla aren't willing to redesign their browser to eliminate the recurring flaws

http://www.2ality.com/2012/02/servo.html is worth a read.

> Chrome is the only browser going along these lines

Chrome is using a C++ core with no plans to stop doing that that I know of....


I wasn't clear. I wasn't meaning to suggest that Chrome were going to move away from C++. But they are trying to sandbox using NaCl and other mechanisms. I was aware of the Rust effort, but not that they were going to implement more of the DOM in JavaScript (thanks!). Maybe there will be more security-by-design.

I also think JavaScript is a bad basis for a platform in any case, since it requires loads of complexity to be fast. Why not just a simple typed language or VM? I have read some JS JIT papers and they have found plenty of code generation bugs (more security holes).


I don't think Chrome is trying to sandbox their own code using NaCl, so much; they're just trying to sandbox "arbitrary executable" code.... which is sandboxed automatically if it's JS running inside a JS VM instead of a random binary blob.

For the rest, one of the main points of Rust and servo is to have better security-by-design. Whether the DOM ends up implemented in JS or in Rust is still up in the air at this point, but either one would be much better than C++ from a security perspective.

As for JavaScript, it's what we have due to happenstance, but displacing it involves either a huge amount more complexity in web browsers (to support JavaScript _and_ another language both touching the same objects and whatnot without memory leaks) or just dropping JS entirely and implementing some other language (not exactly likely to succeed). Maybe someone will create a VM that can run both JS and something else well. Maybe. It's not all that simple to do.


You and I are starting from different assumptions. I want to throw out the DOM. It is an application-specific component that should be implemented in "user land". Currently we're heading toward C inside JS inside Rust, with plugins written in C++ running outside any sandboxes. This is never going to be secure. You have to design a VM that everything can sit in. There's no reason why a browser can't sit inside a simple VM except that it builds in huge amounts of complexity regardless of how it's used.

We don't need a VM to run JS "well". Nothing going on in the browser is even CPU intensive if not for the huge gobs of complexity going on. People are making simple things harder and harder to do, and complex things easier and easier. If you just write everything for a simple typed VM you don't have to bend over backward to make things run fast. There's nothing going on in the client side of say, GMail, that I couldn't do (faster!) on the computer I was using in 1997.


I think we're heading for something closer to C inside JS inside Rust without C++ plugins allowed, honestly.

     > Nothing going on in the browser is even CPU
     > intensive if not for the huge gobs of complexity going on.
That's not quite true. People do in fact do CPU intensive stuff in browsers, if nothing else because they write algorithmically slow code.

Note that GMail is not an example of an application that really does intensive JS. A photo editing app would be a better example.


>That's not quite true. People do in fact do CPU intensive stuff in browsers, if nothing else because they write algorithmically slow code.

It's CPU intensive because of the way it's done, not because it intrinsically requires much CPU. That was my point. And it's not just algorithms; everything goes through a million layers of abstraction.

>Note that GMail is not an example of an application that really does intensive JS. A photo editing app would be a better example.

Again, I was doing photo-editing years ago with no troubles. You wouldn't even be able to start your JS photo editing app on a 1996 computer. And by focusing on "CPU-intensive" tasks you're missing an essential point, which is that stuff that shouldn't require any CPU does. My system shouldn't pause - ever. We have optimised everything for high throughput on powerful machines. The JVM suffers from exactly this problem. Java is plenty "fast" if you ignore latency.

There's just nothing demanding enough to require that going on. And yet the UI locks up for 1-2 seconds pretty frequently on my $1000 desktop machine with loads of RAM and CPU. It's even worse on my $400 laptop. There's nothing intrinsic in the hardware or the tasks that should cause this to happen - it's the design of the software. It's because the system is too complex and preemptable that this happens.


Oops. In the above I said the system shouldn't pause ever. This is silly of course. What I meant to say is that it shouldn't pause except when hit with hard resource limits. Loading from the disk or performing an expensive calculation will take some time of course. But there shouldn't be random pauses when interacting with something that ought to be fit into memory. Modern systems are full of pre-emption, GC pauses and other non-determinisms.


I consider my time with Genera to be more akin to archeology and code reading. I look through it for ideas and for examples. How did a group of really smart programmers build an environment that they really liked using? What ideas and techniques did they use? Those are the questions I am looking for answers to. When I actually do any Lisp programming for its own sake I use RMCL or something like that.


I ended up purchasing a Symbolics MacIvory because of this blog post. They aren't cheap!

I wrote about my experiences learning to use Genera here: http://genera.posterous.com/ - which reminds me, I need to spend more time with Genera.


What's the advantage of using a MacIvory over the emulation method in the linked article? Aren't the physical machines slow by today's standards (plus costs of maintenance, failed hardware, etc.)?


The primary advantage for me is that purchasing a MacIvory gave me a legal license to run Genera. This is important to me because I work for Microsoft, a company that pays my salary by selling licenses to software.

I've also been told that a lot of things didn't work or were missing in the version of Genera that people can find online. I can't speak to the validity of that statement though.


It's too bad no one has released a virtual machine with all the configuration steps already done and the system up and running so people can easily try it out.


That is something which looks pretty doable. It would be worth taking a crack at it.


If only hardware research and optimization had gone the way of stack machines instead of register machines. I can only imagine the world of Lisp machines, Forth machines and APL/J Machines that would exist if the hardware more closely matched our expressive languages.


It wasn't stack machines so much as it was RISC and UNIX. The RISC philosophy basically took every instruction not needed to efficiently implement C and punted it to software, even if it could be efficiently implemented in hardware. Then UNIX took pretty much every machine feature not needed for running C programs and hid them from software.

Take, for example, read and write barriers for GC. On a modern system with virtual memory, each memory access is run through a TLB which has, among other things, protection and page-out bits. That could easily be supplemented with a couple of extra bits to implement GC barriers. While we were getting greedy, we could even add a lightweight trap mechanism for handling the associated faults in user space, at the user's privilege level, to avoid the expense of transitioning into kernel privilege level (indeed Intel and AMD implement all the necessary functionality in their virtualization extensions).


This is very much in line with what Alan Kay says about current chip architectures compared to what was available when he was coming up in the field. He often talks about the Burroughs machines and how much more advanced they were compared to our current CPUs and laments that for all the gains that Moore has given us, we have lost incredible amounts of speed via our architectures being aimed solely at C.

One anecdote that he likes to use is to compare the speed of Smalltalk running on the Xerox Alto computer with Smalltalk running on a current CPU that is 50,000x faster than the Alto. He notes that the same benchmarks run only about 50x faster on the modern system, claiming that this means we've lost a factor of 1000x in efficiency just on the basis of using inferior architectures (at least inferior if your target language isn't C).

Part of me is thankful for the relentless push of x86 and the speed gains realized, but another part of me really regrets that all of the crazy architectures from the 70's and 80's have been lost.


The 1000x figure is probably an overstatement, as is the 50,000x figure.

The Alto's main memory had a cycle time of about 850 nsec, and could transfer 2 16-bit words per cycle: http://www.computer-refuge.org/bitsavers/pdf/xerox/parc/tech....

This gives a main memory bandwidth of roughly 5 MB/sec. A top-end single CPU system today has probably 25 GB/sec available to it, a factor of 5,000 more. Moreover, much of that is achieved through optimizing burst reads--actual sustained random access throughput is going to be much lower and the delta much less.
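
For what it's worth, that back-of-the-envelope arithmetic checks out at a REPL, using the figures quoted above:

    ;; Alto: 2 x 16-bit words (4 bytes) per 850 ns memory cycle
    (/ 4 850e-9)    ; => ~4.7e6 bytes/sec, i.e. roughly 5 MB/sec
    (/ 25e9 5e6)    ; => 5000.0, the "factor of 5,000" quoted above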

Given modern implementation techniques, the actual efficiency loss is probably on the order of 10x rather than 1000x. And much of it is the result of the memory wall, which has been driven by DRAM physics rather than micro-architecture. Doing a couple of memory lookups to support dynamic dispatch is a hell of a lot more expensive, relative to an ALU operation, these days than it was 30 years ago.


Kay is greatly exaggerating those figures, and tends to blame problems with the modern software stack on the hardware.

Dan Ingalls gave a talk in 2005 about the history of Smalltalk implementations in which he mentioned the Xerox NoteTaker. The NoteTaker was a PC powered by the 8086, and according to Ingalls executed Smalltalk VM bytecode at twice the speed of the Alto. Here is the link to the talk: http://www.youtube.com/watch?v=pACoq7r6KVI#t=42m50s and here is my analysis with more details on the specs and economics of the NoteTaker: http://carcaddar.blogspot.com/2012/01/personal-computer-youv...


What you're saying about the VM is a software problem, not a hardware one. You're right about the VT-x extensions, and read barriers for GC is exactly what Azul is trying to do with their kernel patches, but there was no reason why the GC couldn't have been moved into kernel space before virtualization extensions came along.

Same thing with stack machines vs registers (why would you ever want a stack machine for CPS-compiled code?), tagged arithmetic (SPARC has tagged arithmetic instructions, but it turns out pipelining makes "manual" tag-checking just as fast), etc.

If anything, a pipelined, superscalar RISC CPU benefits Lisp more than it does C.


> What you're saying about the VM is a software problem, not a hardware one.

The strict conceptual partitioning of software problems and hardware problems is quite passé these days. In the last 10 years, Intel and AMD have added a tremendous amount of very CISC-y functionality into x86 (e.g. string search instructions), in recognition of the fact that exploding transistor budgets make hardware the right place to implement certain things.

> but there was no reason why the GC couldn't have been moved into kernel space before virtualization extensions came along.

GC couldn't have been moved into kernel space because of the second part of my argument: UNIX hides hardware features not necessary to run C programs. The MMU can do quite a lot that is obscured behind the very limited mmap() abstraction.


I have a profound regret for what was lost in the transition to the Windows/Unix & C worlds. We've gained in the transition, but what was lost is too easily disregarded under the `popularity = useful` metric that is so common.


Much love for lisp machines, but stack machines do not offer any performance advantages over register machines, and actually make optimization much harder. See Ungar's 1993 thesis on the Self 93 compiler for an early realization of this, where he examines what microcode / register windows / etc could do for him, and how he was able to do just as well with registers.


Several RISC chips for Lisp Machines were under development: Xerox, Symbolics (Sunstone), and the University of California (SPUR) all had projects for that. The AI winter then killed them. The Lisp Machine systems were later ported as emulators to the Alpha (Symbolics), SPARC (Interlisp), and other processors.

"Also, note that the Sunstone project did address many of the competitive concerns, especially the continual mention of Sun in this analysis. The Sunstone project included a chip design for a platform meant to run Unix and C, as well as Lisp. It was a safe C exploiting the tagged architecture, for example, to allow checking of array bounds. And the Sunstone project was being produced on-time. But to back up the analysis of Symbolics’ priorities, it was cancelled as we were getting the first chips back from LSI Logic."


And then we'd have had people writing applications that Lisp isn't suited for complaining at what could have been if only we'd gone with a simple register architecture. Don't get me wrong - I believe in owning the whole stack, and I'd love to see some Lisp machines. I fully believe that something like this may reappear in the future. But let's not kid ourselves: none of these singular visions of simplicity is going to be good enough for everything.


It's not necessarily the case that Lisp machines would be the only way. I'd think more like what Intel is doing these days, adding instructions to SSE to speed up things like string processing.


This just reminds me that I really need to get the MacIvory board set that I bought from David Betz up and running... I have the old Mac, the AEK II with template, and so forth, just not the TIME. :(


I'm trying it out, but I can't seem to get past the "Please type the date and time:" prompt. Does anybody know the format it expects, to keep it from dropping into the debugger?

I'm assuming that it should be getting the date and time from the network but that doesn't seem to be working either, although I'm not using a dedicated host like in the article. I'll do further exploration on a dedicated machine at a later date.


You tried "MM/DD/YYYY hh:mm:ss"? Uh, maybe add " PDT" or whatever is appropriate for your timezone.

(It's been a couple of years since I booted my LispM, but I used to know how to do this :-)


I tried your format, and every time I get

  Error: Unable to set calendar clock
  
  TIME:SET-CALENDAR-CLOCK
    Arg 0 (TIME:NEW-TIME): 354054620
and then drop into a debugger. Are there any steps I could do when it can't set the time to get the rest of the system to boot?

Edit: the emulator is outputting:

  arithmeticexception; file stub/output10 line 215
when I enter the time, so there must be an emulation error of some sort.


Are you typing the year as "2012" rather than "12"? The universal time you have shown here decodes as 3/22/1911, which is close enough to 100 years ago to make me wonder. Either that, or you copied the display contents by hand and left off a digit.
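
That decoding is easy to check from any Common Lisp REPL, since universal time counts seconds from 1900-01-01 GMT (which appears to be the convention TIME:SET-CALENDAR-CLOCK's argument uses here):

    ;; Second argument 0 = GMT.
    (decode-universal-time 354054620 0)
    ;; => values for March 22, 1911 (GMT) -- roughly a century off from 2012,
    ;;    consistent with a dropped-digit or "12" vs "2012" problem.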


Yes, I did type in 2012 and I did copy the display contents by hand due to no copy/paste from the terminal window. I think I got it right, but I might have missed a digit.


Have you had any luck with this? I'm running into the same issue.


It seems not to be Y2K safe. Try changing your date to something around the '80s; having done that, I didn't get the "Please type the date..." prompt.


Why hasn't anyone ported (Open)Genera to modern architectures? What are the barriers?


That's what OpenGenera is. Originally it ran on the Alpha. Brad Parker ported it to AMD64.


Here's the least-hairy part of the disk drivers in the LMI source code: http://code.google.com/p/jrm-code-project/source/browse/trun... I haven't looked at any Genera source code, but I suspect it is similarly hairy. It's a lot of work.


Hairy, especially because most of us haven't done systems programming like that, but how cool is it to see "(without-interrupts ..."??
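
For readers who haven't seen the idiom: WITHOUT-INTERRUPTS wraps a critical section so the scheduler can't preempt it mid-update, which is how Lisp OS code protects shared structures. A rough sketch of the flavor -- the helper names here are invented for illustration, not actual LMI/Genera internals:

    ;; Hypothetical sketch; PAGE-NEXT and *FREE-PAGE-LIST* are made-up names.
    (defun return-page-to-free-list (page)
      (without-interrupts
        (setf (page-next page) *free-page-list*)
        (setf *free-page-list* page)))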


Ouch... are there any other Lisp OSes? I remember finding a list some time ago, but none of them showed any promise AFAIK.



@apgwoz: Thanks for sharing. Going to take a crack at it, and also try to make a VirtualBox OpenGenera appliance. No ETA, but the aspiration exists.



