Hacker News new | past | comments | ask | show | jobs | submit login
Lisp is sin (msdn.com)
103 points by iamelgringo on Sept 1, 2009 | hide | past | favorite | 53 comments



> But I'm willing to bet that a lot more developers will be able to understand this since this is in a programming language they understand well.

This is one argument I don't buy. He talks about the Mort programmers never being able to understand Lisp. I am willing to bet that any Mort programmer who looked at this C# would not be able to understand it. I'm sure they could take advantage of it, but write it themselves? I don't think so. And anyone who could write this can grasp Lisp themselves.


Yeah, he says that Lisp is too hard for the common man, and then sort of implies that this is because it doesn't use ALGOL-derived syntax. I don't think that's the issue. It's not Mort who's fallen in love with ALGOL syntax, it's the CS-graduate Elvises who cling to their expensively acquired "skills" (priesthood memberships) with religious fervor (he says, as a CS student).

I program in C#, and lambdas and delegates are nice (I can't use expression trees because the company I work for standardizes on .NET 2.0). But the system is too gnarly, still verbose as fuck, and nowhere near as nice as Lisp (as Krishnan freely admits). This just increases the need for sophisticated developer tools and makes things more obscurantist, not less. C# is just another variation on Java, i.e. a way to drag Elvis types 10% closer to Lisp without depriving them of their oh so precious ALGOL syntax.

Fuck the priesthood, seriously.


Something I've never gotten around to is building an algolish-lisp -> lisp translator.

    [a, b, c] -> '(a b c)

    f(x) -> (f x)
These two translations are strictly syntactic modifications to lisp.

Some syntactic sugar:

    {
        f(x)
        g(x)
    } ->
    (progn
        (f x)
        (g x)
    )
With these three tweaks, we are probably 90% of the way to recruiting the common man (if syntax is really what puts them off).
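For what it's worth, the first two rewrites can be sketched in a few lines of JavaScript (naive and non-nesting-aware; `algolToLisp` is a made-up name, and a real translator would need an actual parser to handle nesting, strings, and the progn rule):

```javascript
// Naive sketch of the two purely syntactic rewrites.
// Handles only flat, non-nested cases; strings and nesting would
// require a real tokenizer/parser.
function algolToLisp(src) {
  return src
    // [a, b, c] -> '(a b c)
    .replace(/\[([^\[\]]*)\]/g, (_, inner) =>
      "'(" + inner.split(",").map(s => s.trim()).join(" ") + ")")
    // f(x, y) -> (f x y)
    .replace(/(\w+)\(([^()]*)\)/g, (_, f, args) =>
      "(" + [f, ...args.split(",").map(s => s.trim()).filter(Boolean)].join(" ") + ")");
}

console.log(algolToLisp("[a, b, c]"));  // '(a b c)
console.log(algolToLisp("f(x)"));       // (f x)
```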


You can do these within PLT Scheme quite easily. In fact, PLT Scheme comes with an Algol 60 reader that reads Algol 60 and turns it into scheme forms which are then executed. It's a cute trick.

http://docs.plt-scheme.org/algol60/index.html


This looks great for trivial code, but what about more complex code?

    destructuring-bind([a b &rest c] some-list lambda([x] +(a b x)))
or

    loop(for i from 1 to 10 collect 1+(i))
or

    with-current-buffer(get-buffer-create("foo")
      save-excursion(
        insert(text)
        buffer-substring-no-properties(point-min() point-max())))
This looks horrible. There is an advantage in using the same syntax for both lists and function application.


I've done things like this in Scheme - not robustly, but it's worth pursuing IMHO. I can't stand math formulas in SEXP notation. Array/slice/hash lookup notation would be nice too.

Lisp gets it right by excluding sugar from the language kernel, but a "batteries-included Lisp" should include it in a standard library.


I tried this with arc (worked with anarki based in arc2)

http://arclanguage.org/item?id=8172

The syntax is inspired by McCarthy's in his paper from 1958. It works, but I don't find it useful.


I never did understand this averseness to anything not ALGOL-derived... I personally like Lisp's syntax (fuck off, even though it looks different and uses "s-expressions" and code as data and all the rest - IT'S STILL SYNTAX) and I like point-free syntax and Forth-like syntax and Prolog intrigues me and ML has interesting syntax too and ...

My point is: IT'S JUST SYNTAX! Syntax doesn't define the language, just the look. I do like a nice, clean, concise syntax though.


Syntax does define the language. It's very difficult to write macros for an ALGOL-style language. It's possible but very difficult. Some people are put off by Common Lisp's LOOP macro or the FORMAT function and will try and avoid using the more complex features of them. Those are small examples of how the syntax encourages, or discourages, the use of language features.

Hell, I even hate doing any complex shell scripting because I can never remember the difference between [ ] and [[ ]] and ( ) when using conditionals.


And anyone who could write this, can grasp Lisp themselves.

Write the memoizer? That's basic functional programming, but what does that have to do with Lisp? You could write it in JavaScript:

    function memoize(f) {
        var memory = {};
        return function memoized(arg) {
            return arg in memory ? memory[arg] : (memory[arg] = f(arg));
        }
    }
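A quick sanity check of that memoize (the definition is repeated so the snippet runs on its own; `slowSquare` and the call counter are just illustrative):

```javascript
// memoize copied from the comment above, plus a quick usage check.
function memoize(f) {
    var memory = {};
    return function memoized(arg) {
        return arg in memory ? memory[arg] : (memory[arg] = f(arg));
    };
}

var calls = 0;
var slowSquare = memoize(function (x) { calls++; return x * x; });
console.log(slowSquare(4)); // 16
console.log(slowSquare(4)); // 16, served from the cache
console.log(calls);         // 1 - the wrapped function ran only once
```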
The big thing Lisp has left is reader and compile-time macros, and other systems [1] might be able to do both better pretty soon. Yes, Lisp has a lot to teach people who think that programming languages are passed down immutable from the superior minds of GvR or Stroustrup, but once you realize the enormous variety of decisions made in designing a language, it becomes clear that almost no number of languages will fill the space of possibilities.

A lot of ways, languages are like religions - they're part of social structure, and part of people's identity, and they tell you how to pray, when to fast, how to allocate and free memory, how to traverse the call stack, etc. But when you think about it hard enough, it's clear that all of these decisions are better made by individuals, not committees or institutions.

1. http://piumarta.com/software/cola/


> I know that if I write a C# program today that it can be called by a Boo program which in turn can be called by IronPython.

This is another reason we should be grateful for Clojure. It will have a .Net implementation soon, too.


I see two problems with this statement:

1. This only works if you're willing to seclude yourself in the Microsoft Yurt and never leave. Technical benefits aside, history has shown MS is not afraid to abandon projects. It's a dangerous position to take, in my opinion.

And I sincerely doubt a port of Clojure to .NET would turn out identical in practice. We're back to the exact same problem that plagues other lisp implementations right now, which is to say we haven't really bought much.

2. It's not that different from how things are in the C world, save that you'd share more of your runtime infrastructure. My job has been full of instances where an executable interacts with a diverse set of languages. One example I work on daily includes Erlang, Ruby, Prolog, and C++ all in one space, calling around (in a structured way, of course). The implication that you cannot make modern languages communicate without a single umbrella runtime is demonstrably false.


What I meant was now you have an ecology of languages on the jvm (Clojure, JRuby, JPython, Scala, Groovy...), and on top of this a .NET port is in the works. Sorry if I was a bit unclear.


For some value of "soon"


http://github.com/richhickey/clojure-clr/tree/master

http://blog.n01se.net/?p=41

http://clojure.org/todo

As a rule, things move pretty fast in Clojure-land. And in the beginning it was dual-platform, so it's not really a new idea either.


From [ http://clojure.blip.tv/file/1313398/ ] I got the impression that Clojure on the CLR was back-burner at best. There appears to be only one committer to the CLR source base since February (possibly longer).


I believe the greater push is to write Clojure in Clojure which will (hopefully) facilitate targeting other platforms.


Yes, like some did with Squeak. You can generate a vm using Squeak itself, so it will easily flow to other places


And why would being able to run it on .NET be some reason to be grateful?


I thought Rich Hickey abandoned the .NET version of Clojure a couple years ago?


Yes, but it's alive again. Though I'm more excited about the clojure-in-clojure thing - it will eventually make it easily portable on a whole lot of platforms.


>> one of the driving forces was to let non-geeks build software

He makes the extremely important point that the technology has failed if ordinary people can't use it to get their own work done. However, I've personally seen a project where the aim was essentially to let non-programmers program, and the result was horrendously messy. In large part, I think that's because a lot of people ended up developing that had no understanding of the underlying concepts, and as a result the output was extremely hacked together and unintuitive. I think it's great when people realize that the whole point of most software is to enable regular users to complete a task without worrying about the internal computation, but it's just as important to realize the need to _understand_ what is happening.


Shouldn't we ask ourselves if this goal is reasonable? Making arbitrary software is at the limits of our capacity right now, even if people devote their lives to it as a profession, passion, and an art.

It's one thing to take an approach like Apple Automator (a much beloved automation environment), but it's another entirely to say, "General Purpose Software should be within the grasp of the 'average person.'" It's not clear that the goal is even reasonable! A lot of software, to do what it does, requires the use of complex mathematics, algorithms, and cryptography that frequently even the implementors only vaguely understand. Manipulating those in an abstract and correct fashion is difficult in the extreme (e.g., the morass of timing attacks that have plagued modern implementations of cryptographic protocols). We can barely make software with smart people!


Making arbitrary software is at the limits of our capacity right now, even if people devote their lives to it as a profession, passion, and an art.

Yeah, you are exactly right. Programming languages need to be designed for the very best programmers -- they are the ones that need the productivity boost. For me, not understanding the language is never a problem. Making something "easier to understand" will not help me write software that goes together faster or ends up more reliable. It often has the opposite effect; Java's lack of ... everything, for the sake of simplicity, really slows me down. Comparatively, learning a new concept is much easier. If I start using a library that uses Arrows, I read the Arrow paper, spend a few days playing with it, and then I have one more tool that will help me be more productive every day for the rest of my life. Making a language that doesn't have arrows saves me reading a paper about them, but it wastes my time every day for the rest of my life.

So, "simple" programming languages -- do not want. I am not saying that you should start your "intro to computer programming" class with a long lecture about category theory, but I am saying that you should never hide things from people that want to know about them.


... slightly off topic, but do you have a good resource for learning Arrows? It's on my list of things to do, but I haven't dug around for a good example yet.


In terms of using "them", the HXT documentation made it pretty clear for me. (I put "them" in quotes because things like Arrows and Monads are really adjectives, not nouns. When used as nouns, people generally understand what you mean, but this confused me for a long time and I try not to confuse other people. Consider it my version of a "Monad tutorial". :)

In terms of theory, I found the "Applicative Programming with Effects" paper and the "Typeclassopedia" very helpful.

http://www.soi.city.ac.uk/~ross/papers/Applicative.html

http://haskell.org/sitewiki/images/8/85/TMR-Issue13.pdf

I started doing a real implementation of these data structures in Perl this week. (I say "real" because people have "added Monads to Perl" before, but they didn't really add Monads, they just added a "programmable semicolon".) Anyway, in doing so, I saw in great detail the relationships between the various types (specifically "liftM2 ($)" and "<*>"), and tried to generalize things as much as possible. The result was something like Arrows, and that helped me understand the "why" in addition to the "how". Writing test cases was also helpful to me; using these esoteric structures in "real code" helped me build up the intuition needed to advance farther.

Anyway, it took me three readings of the Applicative Programming with Effects paper to get Applicative Functors, and after that, Arrows were pretty easy. So I recommend that paper, and lots of poking around in ghci.

To bring this back on topic, yeah, it would be hard to teach this stuff to Joe Average. But a language that allows it is still important to develop. (I also learned firsthand that language design decisions can make it difficult to implement certain features. Implementing the equivalent of "instance Monoid b => Monoid (a -> b)" is almost impossible to do cleanly, due to the polymorphism of "mempty"; it's defined in Haskell as "mempty _ = mempty", and in Perl, we discard the type information before we have a chance to pick the "second" mempty.)


Horse hockey.

While I know that there are a lot of horror stories (and I can tell a few), some of the most productive "programmers" I've ever met used MS Access or Excel.

If you don't consider advanced usage of these applications as a form of programming, then you are illustrating the point: It is not only possible to "dumb down" programming, it is inevitable. So much so that you haven't even noticed it happening.


The only conclusion I can draw is that you didn't read my comment, because if you had you wouldn't try to counter my argument by circuitously agreeing with me. I am sure your friend was very productive with Access and Excel, and I'm sure they used advanced cryptographic protocols, solved tricky problems involving communications timing, handled cross-platform compatibility issues, programmed efficient inner rendering loops, etc.

I'm talking about "general purpose software," which encompasses a huge volume of space. Rare is the person who can competently address the wide variety of problems found in the world of software. Excel and Access, in my view, are perfect examples of paring down the acceptable problem space to something reasonable and making tools for people to solve those specific problems naturally. You can't write a 3d engine in excel, but that's okay, because the tool is not for that.

But this argument breaks down when you try to take it to a general-purpose software kit, which has to handle anything and everything. In this wide, unshielded world of hard problems you have to have the most capable and powerful tools available. Deliberately limiting tools in this environment is like saying, "Our river pilots have lots of experience handling barges, so it'd be counter productive to train them on oceangoing vessels, we'll just build bigger, sturdier barges for ocean travel." Yes, sailing on the open ocean requires more powerful and complex tools, but you appreciate those tools when the ocean gets rough.


> You can't write a 3d engine in excel

You can, a guy has done it. I'm only being pedantic because this is cool if you haven't seen it:

http://www.gamasutra.com/view/feature/3563/microsoft_excel_r...


That is really cool.


Heh... I know a guy who thinks he's too dumb to program in anything except Lisp (ALGOL syntax is hard! It's like math! :-)


I think you are talking about me


(Raises hand)


However, I've personally seen a project where the aim was essentially to let non-programmers program, and the result was horrendously messy.

So what? The goal of turning every user into a programmer is perfectly reasonable. Almost all users of the early systems and computers were also programmers. Even today, business users make use of Excel/Access and some macros to do some programming (even though it can be seen as basic).

Don't you remember a time when operating systems came with programming languages and compilers and actively promoted them to the user?

Oh, and HyperCard. There was another great example of "user's" programming. I've also seen some hideously ugly GIMP script-fu scripts but they worked and did the job of the "user" that wrote them.

In the end, the distinction between a user and a programmer is artificially created by software that doesn't view anybody else as smart enough to enter the monastery of the programmer.


This article is a couple years old. The author probably would have tried Clojure for his foray into Lisp had this been written today.


He rambles a lot, but he's right about the need for a new Lisp, and refers to the ILC'05 presentations by Dussud, Baker, and McCarthy on "Re-inventing Lisp". Those are some pretty radical proposals. (Summaries: http://www.findinglisp.com/blog/2005/06/ilc-2005-wednesday-r...).

I think the trouble with Lisp (and Scheme) is that there's so much cruft and inconsistency beneath the veneer of simplicity (see http://tnovelli.blogspot.com/2009/08/lisp-crisis.html). Lisp is a great language to study; I just wish it was more practical and popular. It sucks having to choose between awkward (Lisp) and inflexible (everything else). :-(


Your resources are pretty old; the "crisis" of car/cadr is just not there with modern, updated lisp implementations like PLT Scheme.

PLT Scheme is competitive in every way with Python and Ruby; I don't know why people keep ignoring it.


Umm... what's this scheme_make_pair(car,cdr) function in plt/src/mzscheme/src/list.c? That's a wart (the crisis is in my mind :-)

It doesn't have to be that way: Clojure has abstract sequence types, with cons/car/cdr wrappers in case you want them.


I fail to see how having a linked list datatype is a crisis. Use vectors if you don't like them. You can write a very large amount of complex, fully-functional scheme code without ever typing in any expression matching "c[ad]+r". PLT Scheme also comes with a host of sequence-agnostic (or sequence aware for performance) comprehensions:

Please refer to: http://doc.plt-scheme.org/guide/for.html


Compare the C# version of memoize to this one (from On Lisp). The C# one looks very long and ugly in comparison, so I hope it isn't the new Common Lisp.

(defun memoize (fn) (let ((cache (make-hash-table :test #'equal))) #'(lambda (&rest args) (multiple-value-bind (val win) (gethash args cache) (if win val (setf (gethash args cache) (apply fn args)))))))


Lisp certainly has many advantages, but general readability is not one of them. Maybe when talking about complex algorithms with a sufficiently designed DSL... Here's your code reformatted:

  (defun memoize (fn)
      (let ((cache (make-hash-table :test #'equal)))
      #'(lambda (&rest args)
          (multiple-value-bind (val win) (gethash args cache)
          (if win val
              (setf (gethash args cache) (apply fn args)))))))
And here is a prettier (use of TryGetValue), C# 3.0 (Func types, local variable type inference) version:

  static Func<TParam, TReturn> Memoize<TParam, TReturn>(Func<TParam, TReturn> func)
  {
      var memoDict = new Dictionary<TParam, TReturn>();
      return arg =>
      {                
          TReturn result;
          if (!memoDict.TryGetValue(arg, out result))
          {
              result = func(arg);
              memoDict[arg] = result;
          }
          return result;
      };
  }
  
The C# version is clearly longer, but less dense. Most of the noise in the C# version comes from the verbose type declarations. Almost anyone could read the C# code, provided they know that => means lambda and have at least heard of a closure. That is not true of the Common Lisp version. As with everything in engineering, it is a balance. Lisp is absurdly powerful and C# is absurdly clear to read. As a programmer who does a lot of maintenance programming at work, I highly appreciate that attribute of the language.


I would format the function like this:

    (defun memoize (fn)
      (let ((cache (make-hash-table :test #'equal)))
        (lambda (&rest args)
          (multiple-value-bind (val win)
                               (gethash args cache)
            (if win
              val
              (setf (gethash args cache) (apply fn args)))))))
I'm a Lisp programmer and the C# arglist is already hard to parse with lots of noise - for me.

Second I don't look for { or ( but for the block indentation, so a { on a single line does not give me enough visual clues.

The 'return arg =>' form is not obvious to me. What does it do? Where does TReturn come from, TParam?

Why is there }; and } ?


> Second I don't look for { or ( but for the block indentation, so a { on a single line does not give me enough visual clues.

We could argue for days about these sort of issues, and there are passionate ideas on either side of the fence.

  function
  {
    code;
    code;
  }
In that form, you have the beginning { and ending } to explicitly define the beginning and ending of the code block, and all of the code inside is also indented for good measure. Maybe it's redundant, but I don't see how it's wildly different from:

  function
    code;
    code;
> Why is there }; and } ?

Have you never programmed outside of lisp? I've never programmed in C#, but if 'arg =>' is the beginning of a lambda, then it's because the definition of that line is return EXPRESSION;.

And in that case EXPRESSION = 'arg => { code }'. That's where the ';' comes from.


=> denotes a lambda. The left side is the args, the right side is the body. Lambdas are typically a single expression, but can have a statement body instead. A semicolon denotes the end of a statement.

  var square = x => x * x;

  var square = x => { return x * x; };


I don't know C#, but it seems to me that your C# version of memoize works only for functions with exactly one argument, while the CL version works for any kind of function.


Yeaaah.... It turns out that generic methods with variable arity are a sore spot for the .NET type system. The C# and F# teams are quite aware.

The typical/recommended/easy/practical solution is to just provide another overload, pack the args into an object array, and then call the original overload; completely ignoring the problem.

Alternatively, you could do some scary things with reflection...
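For contrast, a dynamically typed language sidesteps the arity problem entirely, much the way the CL version does by keying its hash table on the whole args list. A rough JavaScript sketch (`memoizeVariadic` is a made-up name, and the JSON.stringify cache key is a simplification that breaks on arguments that aren't JSON-serializable):

```javascript
// Variadic memoize: pack all arguments into a single cache key,
// analogous to the CL version's (gethash args cache).
// Caveat: JSON.stringify fails as a key for functions, cycles, etc.
function memoizeVariadic(f) {
    var cache = {};
    return function () {
        var args = Array.prototype.slice.call(arguments);
        var key = JSON.stringify(args);
        if (!(key in cache)) {
            cache[key] = f.apply(null, args);
        }
        return cache[key];
    };
}

var add3 = memoizeVariadic(function (a, b, c) { return a + b + c; });
console.log(add3(1, 2, 3)); // 6
```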


Hmmm, I guess I should learn how to format text on hn when I want to make a point about readability. ;-)

Even so, with the caveat that you have to learn Common Lisp first, I think the CL version is infinitely more clear and elegant. All that type information is redundant in the C# version, since in this case all you want to say about the types is that you don't care what they are.

multiple-value-bind is among the longest function names in CL, and it does something that Python does much better with syntax alone -- perhaps Python is the new Lisp.


"Almost anyone could read the C# code, provided they know that => means lambda and at least heard of a closure. That is not true of the Common Lisp version."

Aha. WTF? Ok, if I know what the code means, then I can read it. But that's only the case for C# and not for Common Lisp.


Code in unfamiliar languages always looks ugly. I know C# but not Lisp, and that just looks like ugly brackets salad to me. It can't help that your formatting was AWOL.

So your statement that The C# one looks very long and ugly is entirely subjective.


Of course it is, and can I coin the "ugly brackets salad" phrase?


I coined it, but you can have it. See also: angry fruit salad http://catb.org/jargon/html/A/angry-fruit-salad.html


If I remember it right, Joel's "Perils of Java Schools" was about college CS programs which should be a lot more fundamental than just practical. This guy's essay conflates the equivalent of weekend carpenters, building maintenance, and civil engineers into one set of "programmers".


Hm. If Lisp were an "everybody's" language, what would that Lisp be? Actually, the main success of Lisp comes from its purity and its scientific depth. If you try to follow another goal, you just lose sight of your actual one. If somebody really needs an "everybody's" language, for example because he is a lawyer or something else, okay. There are Python, Javascript, Java and many other languages that try in different ways to be for everybody. And all of them are important. As is Lisp, how it is.



