Why Lisp? (lisperator.net)
91 points by octopus on Jan 9, 2013 | hide | past | favorite | 69 comments



> This is helped by a great feature of the environment (Emacs/SLIME) which provides cross-referencing tools, for example, I can place the cursor on the name of a function and ask “where is this defined”, and it jumps to the place of the definition, which could be in my own code or in third party code. But I can also ask “where is this function called” or “where is this variable referenced”. This is extremely useful for code refactoring and it beats every “modern” IDE I've seen.

Wow, really? You have this in Eclipse (and probably every 'modern' IDE out there) for every language.


Or, you know, cscope?


Interesting that "Lisp" is assumed to be Common Lisp and that Scheme is assumed to be strictly a learning tool. I've found Racket to be much more useful than any other Common Lisp/Scheme.


I have to agree with that. I find Scheme to be a lot cleaner, more modern, and easier to use than Common Lisp.

I really only see two reasons to choose Common Lisp over Scheme. The first is that CL has a ton of libraries compared to most Schemes (though Racket has quite a few of its own... I'm not sure how those two would compare on the library front). The second reason is that CL has a much larger community.


Well, Common Lisp is Lisp, Racket is Scheme; they are different. This perpetual confusion is just useless. Let's call things by their names. You don't call C++ C or PHP Perl, right?


Common Lisp is Lisp.

Racket is Scheme.

Scheme is Lisp.

SBCL is Common Lisp.


Let's translate it to C:

C99 is C.

C++0x is C++.

C++ is C.

Clang is C99.

Do all the lines seem correct?


I disagree with your translation. I admit that "SBCL is Common Lisp" is a stretch, but no more than your original "Racket is Scheme". How about instead:

LISP 1.5 is like C

Common Lisp is like C++

Scheme is like Objective-C

(ignoring all the other Lisp dialects that have fallen by the wayside: Maclisp, Interlisp, ZetaLisp)


Here's a more detailed answer: http://lisp-univ-etc.blogspot.com/2013/01/common-lisp-is-jus...

I also agree that Racket is not Scheme; it's a new language. But I maintain that Common Lisp is still a Lisp. Lisp 1.5 would be BCPL or B :) (Why? Because C is in active use, while Lisp 1.5 isn't, so your analogy isn't quite correct.)


A few more slogans:

* Ruby's emphasis on writing DSLs came from the Lisp world, where this practice was developed.

* Some of Python's elegant forms - with, lambda, and the various comprehensions - as well as expressions like return (x, y) or a = [3, 4, 5] - came from Lisp.

* Lisp's macros are unmatched. Those of C/C++ are mere string substitution.

* REPL is Lisp's invention.

* Shadowing (re-binding) of variables and procedures is used routinely - we don't need any special "dependency injection".

* Logically, everything is a pointer, so, technically, every "object" is a first-class citizen.

* Lisp is mostly-functional, so, in cases where you must have mutation, you just do it.

Enough for today.) Over-excitement is a vulnerability.)


It's not so much a question of "Why Lisp?" as it is a question of "Which Lisp?," if you ask me.


Today you have 3 main alternatives:

* Common Lisp (used by the author of the article)

* Scheme

* Clojure


Scheme is not a single alternative. Racket [0] is a Scheme which can be compared to things like Python or Ruby. There are other implementations suited to different uses. For example, Chibi Scheme [1] is a nice alternative to Lua, but not to Python or Ruby.

Given your other two Lisps, I'd rather make this list:

* Common Lisp (which implementation?)

* Racket

* Clojure

[0] http://racket-lang.org/ [1] http://synthcode.com/wiki/chibi-scheme


I think Chicken[1] is a very attractive Scheme alternative. It's full-featured, fast, compiles down to C, has a very nice library (egg) system, and a relatively large and active user community (as far as Schemes go).

[1] - http://call-cc.org/


Hear, hear; I've been using Chicken for commercial and personal projects for about five years now. It keeps getting better.

Occasionally, Racket has a package that I'd like to port (e.g. datalog); but I find it otherwise overbearing.

Chicken has found some local maximum of efficiency and availability of useful libraries.


I would add TinyScheme as an embedded scripting language or as the base of a DSL.


One thing that intrigues me about clojure is that it affords some unique benefits, by virtue of being a lisp. For fun I want to try to write a clojure program which generates C code for a tiny game engine. e.g. you run the program, and it spits out .c and .h files which you can then compile and run.

Then, the challenge is to make it generate java code, too. I.e. in addition to the C code, which you can build and run, it also generates that same functionality in Java code, which you can run on JRE. So the game engine itself would be specified in clojure, and yet it would be "automatically implemented" across two very-different platforms! It's a fun challenge, I think. Maybe not doable, or doable only in a horrible way, but still fun.

But the point is this: that's a great example of a problem which would be pretty much impossible in a non-lisp language. In order to achieve that in Python, you'd probably be forced to implement a lightweight lisp within Python.

So that may demonstrate an answer to the question, "Why Lisp?" ... certain problems can be solved only in lisp, or by reimplementing the list-processing concepts of lisp.


There is already a project that translates ClojureScript to Scheme https://github.com/takeoutweight/clojure-scheme from which you can generate C code. Theoretically you could use the generated C code on any OS with a standard C compiler.


Why generate Java code if Clojure runs on the JVM?

Why is it "pretty much impossible" to generate the game engine C code in a non-lisp language? What is it about Lisp that makes it possible?


The motivation behind "generating Java" was that the game could then be deployed in a browser, just like Minecraft is. Maybe Clojure can already do that though. I haven't looked yet. =)

Clojure runs on the JVM, but perhaps Clojure is expensive in the context of a 60-frame-per-second realtime app. I doubt it would be significantly more expensive than Java, but that's something to test.

So by generating Java, it would be an interesting system, because it winds up producing the same sort of code which Notch made by hand to deploy Minecraft to a browser. Yet it's derived "from the clojure code".

And yet it's also generating C code, which is also derived "from the clojure code". So it's very meta, which I can't resist exploring.

Why is it "pretty much impossible" to generate the game engine C code in a non-lisp language?

Well, it's straightforward to write a program which generates some C code. But how about "either C, or Java"? Or "either C, or Java, or Haskell", ... etc.

You'll soon discover that the nature of the implementation is completely different for different languages. A Haskell game engine would look nothing like a C implementation.

So the situation is, you would need an intermediate source code representation of your game engine which supports introspection. And whatever data structure you use must also be able to be modified during the "code generation", because it's not until the codegen step that you know what sort of special-casing you'll have to do (in order to generate code which implements your game engine in e.g. haskell, or brainfuck, or... etc. Point being: each of those has a different set of special-case requirements, and you can't know which until you're already in the codegen phase.)

So, you could use a different language, and you could represent your "game engine source code" as, say, JSON. And you could have conventions for what your various operations "within that JSON string" actually mean -- e.g. whether you use infix notation, or prefix notation, or.. etc.

And then you could have a notation for how to specify a function, within that JSON string.

And so on.

But then, at the end of it all, you wind up with a poorly implemented version of what lisp already offers you. It lets you do all of that automatically, because those features are built right in to the language. Indeed, the language is the primary reason the features are possible: everything is a list, and everything has access to all lists at all phases of the program's lifespan. You can run at compile time, or compile at run time, etc.

You have access to the entire syntax tree at all times, in every program you write, which enables you to manipulate it. You simply can't do that with e.g. Python. Hence the requirement for an "intermediate thing, which you can manipulate" -- we happened to decide on using JSON as the representation, but you could've easily decided to use something else, etc. The point is, that "intermediate representation" is just a poorly-reimplemented version of the toolset which Lisp gives you by default.
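To make that concrete, here's a minimal Python sketch of the "intermediate representation" idea described above - all names are hypothetical, not from any real project. Expressions live in nested lists (a poor man's s-expression), and each target language gets its own emitter with its own special-casing hook.

```python
# Hypothetical sketch: expressions as nested Python lists, with one
# emitter per target language.

def emit_c(expr):
    """Emit a C expression string from the nested-list IR."""
    if not isinstance(expr, list):
        return str(expr)                      # atom: number or identifier
    op, *args = expr
    if op in ("+", "-", "*"):
        return "(" + f" {op} ".join(emit_c(a) for a in args) + ")"
    raise ValueError(f"unknown op: {op}")

def emit_java(expr):
    # For plain arithmetic the Java emitter happens to coincide with the
    # C one; a real engine would diverge here, which is exactly the
    # special-casing described above.
    return emit_c(expr)

ir = ["+", 1, ["*", "dt", "velocity"]]
print(emit_c(ir))      # (1 + (dt * velocity))
print(emit_java(ir))   # (1 + (dt * velocity))
```

In a Lisp the IR would simply be the program itself; here it has to be rebuilt by hand, which is the commenter's point.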


I wonder if Python's metaclasses would get you what you're describing. Some thought would have to go into the primitives required to generate the constructs for the destination language(s), but metaclasses would allow you to serialize to different languages fairly easily once the primitives are in place. This approach isn't nearly as elegant as Lisp, but I think it takes what you're describing out of the "pretty much impossible" realm.

Overall though I agree with you, I just had a slight disagreement with the "pretty much impossible" claim.
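As a hedged sketch of what the metaclass route might look like (every name here is hypothetical): a metaclass can record each concrete class definition so that a later pass serializes the primitives into a destination language.

```python
# Hypothetical sketch: a metaclass records every concrete subclass so a
# separate pass can serialize the primitives to another language.

class Recorded(type):
    registry = []

    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        if bases:                      # skip the abstract root class
            mcls.registry.append(cls)
        return cls

class Entity(metaclass=Recorded):
    fields = {}

class Player(Entity):
    fields = {"x": "float", "y": "float"}

def to_c_struct(cls):
    """One of several possible serializers; this one targets C."""
    body = " ".join(f"{t} {n};" for n, t in cls.fields.items())
    return f"struct {cls.__name__} {{ {body} }};"

print(to_c_struct(Player))   # struct Player { float x; float y; };
```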


I just had a slight disagreement with the "pretty much impossible" claim.

Thanks for pointing that out. Being a scientist, I too would take issue with this :) hence, I'm extremely interested in discovering whether this would be a workable, real-world solution for the problem I described.

I'm out of time right now, but would you mind emailing me? (Address is on my profile page.) I'd really love to get your thoughts about a couple followup questions that I have.

Thanks again!


And it's really hard to choose one! All three dialects have their own unique benefits and shortcomings.


What does Clojure lack as a language?


It's not the language that has problems, it's the environment. If you're already living in Java land, being tied to the JVM is a big plus. If you're not immersed in Java, you need to learn Java (and the libraries) and Clojure at the same time.


A similar point - if you want to interface with something written in C (i.e. anything not written in Java), it's going to be a lot easier in CL than in Clojure.


You don't need to know anything about java to learn and use Clojure.


then you have an error.

    user=> (+ "a" 1)
    ClassCastException java.lang.String cannot be cast to java.lang.Number
    clojure.lang.Numbers.add (Numbers.java:126)
then you look at a backtrace...


You still don't need to know anything about java to figure that one out.

I realize it's a contrived example, and I'm sure there exists an example where it would be critical to understand java to figure out what the deal is, but I haven't found it yet and I've been mucking in clojure for at least a year now.


I haven't worked with Java in a decade and it still bothers me that it has a NullPointerException when you can't actually manipulate pointers.


Clojure adds syntax, which breaks some of the inherent symmetry of Lisp slightly[0]. This isn't something you're likely to notice unless you do a lot of heavy metaprogramming.

Clojure cannot guarantee elimination of tail calls, because of the limitations of the underlying JVM. This also means that it can't properly handle corecursion in the same way that pretty much every single other Lisp can.

(Both of these have been discussed many times before on HN, so I can pre-empt the next comment in this thread, which will be someone pointing out that Clojure provides "loop" and "recur" - to which I'd respond: yes, they're logically equivalent in the end, but having to force the transformation to a loop explicitly breaks the paradigm, which for me is the whole point of using a Lisp. This is one of those topics that can be discussed ad nauseam with no "conclusion", so it's not worth spending too much time on it.)

Lastly, anytime you're dealing with binary compatibility between various JVM languages, the abstraction is inherently a bit leaky. I haven't used Clojure myself, so I can't comment specifically there, but from my experience with using Java libraries in Scala, I can testify that some of the warts of Java end up leaking into Scala code. Nothing debilitating, just a bit frustrating[1].

[0] Racket does too, but the "syntax" added (square brackets) has the same semantics, so it's really just an equivalent token - an alias, if you will.

[1] One particular example I remember has to do with how Foo.class in Java works, and how a library dependent on this particular pattern has to be used in Scala - it just gets a bit messy.


> Clojure adds syntax, which breaks some of the inherent symmetry of Lisp slightly[0].

In what way does this break the symmetry?

Lisp source code is a Lisp data structure (or becomes one when read by the reader). In Common Lisp and Scheme, that structure is either an atom (an integer, a string, a symbol, etc.), or a list consisting of cons cells. In Clojure, that structure can also be a vector or a map. This is enabled by the fact that vectors and maps have their own literal syntax.

I was uneasy about this in the beginning as well, but then I came to the conclusion that this is not unlispy at all.


Symmetry may not have been the correct word, since it does provide referential transparency, but it certainly adds an extra layer of complexity to the parsing, even if it's only one extra step. Furthermore, I would argue that the additional syntax isn't necessary, which is my biggest beef with it - since you can convey the semantics of a vector or a map without altering the syntax at all, there's no reason to complicate the syntax any more than needed.

> In Common Lisp and Scheme, that structure is either an atom (an integer, a string, a symbol, etc.), or a list consisting of cons cells.

Let me fix that for you: in other Lisps, an item is either an atom or a cons cell. There's "no such thing" as a list in Lisp.

There's a huge difference between having an option with two outcomes (S -> atom | cons) and an option with three outcomes. In computer science, we count "zero, one, many" - booleans are an example of this. By adding a third option, we've stepped out of the realm of the binary into the "many", and that's a much messier world to deal with.


> I would argue that the additional syntax isn't necessary, which is my biggest beef with it - since you can convey the semantics of a vector or a map without altering the syntax at all, there's no reason to complicate the syntax any more than needed.

You could say the same about the quote, backquote, unquote and unquote-splicing syntactic sugar being built into the reader. It is redundant, and yet it's there in most Lisps -- because it helps readability/maintainability at the cost of the little complexity it adds.

> Let me fix that for you: in other Lisps, an item is either an atom or a cons cell.

In Common Lisp, it is only correct insofar as the language defines "atom" as "not a cons cell" [1], contrary to the intuitive understanding that it's an indivisible entity. E.g., CL vectors are atoms, even though they have more in common with lists than, say, symbols. And they do have literal syntax, like #(1 2 3). How is that different from Clojure's [1 2 3], save the different type of parens?

[1]: http://www.ai.mit.edu/projects/iiip/doc/CommonLISP/HyperSpec...


This has been in Lisp since the big flood.


Common Lisp:

    * #(1 2 3)
    #(1 2 3)
    * (type-of #(1 2 3))
    (SIMPLE-VECTOR 3)


Being a bit of a pedant: do you mean "mutual recursion" instead of "corecursion"? The former means multiple functions calling each other, thus being recursive indirectly. The latter is more like an inverse of recursion: instead of starting with some data and reducing to base cases, you start with base cases and produce (co)data.

Defining a lazy list in terms of itself is an example of corecursion, which Clojure can do (albeit a bit awkwardly because you have to make the laziness explicit).


I wonder if, in the future, this limitation of Clojure will go away. I know there has been talk of the JVM folk adding things like support for tail recursion. That said, I'm not smart enough to know whether talking about adding support for tail recursion translates to "Once we're agreed, let us go ahead and solve this tractable problem," or if it translates to "Gee this would be nice, but it's wicked hard."


> support for tail recursion

Let's be clear - it "supports" tail recursion; it just uses more than a constant amount of space on the stack to handle the calls - in other words, it doesn't transform the recursive call to a loop.

Eliminating tail calls is by no means a hard problem in the general case - it's on the order of something you might be expected to do in an introductory compilers class.

The problem is particular to the JVM. As I understand it, it's due to the fact that the JVM was architected in a way that doesn't allow it to guarantee that a tail call can be transformed into a loop. These techniques certainly existed in the early 90s (they've existed since, what, the 60s?), but at the time, I guess people didn't think it was a priority.

I have no further knowledge of the JVM, so I can't really comment on whether or not they'll be able (or willing) to add the support, but the problem is dealing with legacy designs/systems, not a difficult problem of CS theory.
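The transformation itself can be illustrated outside the JVM. CPython doesn't eliminate tail calls either, so the manual rewrite that Clojure's loop/recur forces on you can be sketched in Python (a hypothetical example, not tied to any of the implementations discussed):

```python
# The tail-recursive version is correct but grows the stack on every call,
# because CPython (like the JVM) does not eliminate tail calls.
def count_recursive(n, acc=0):
    if n == 0:
        return acc
    return count_recursive(n - 1, acc + 1)   # tail position, still a real call

# The manual loop transformation - morally what loop/recur does in Clojure:
# rebind the "arguments" and jump back to the top instead of calling.
def count_loop(n):
    acc = 0
    while n:
        n, acc = n - 1, acc + 1
    return acc

print(count_loop(1_000_000))   # 1000000
# count_recursive(1_000_000) would raise RecursionError instead.
```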


> Eliminating tail calls is by no means a hard problem in the general case - it's on the order of something you might be expected to do in an introductory compilers class.

Actually it is problematic. Especially the interaction with dynamically scoped constructs...


It depends on the project/person. For me it's the JVM (though of course the JVM is a big benefit too), mostly because with the JVM I'm not able to create a library which might be used from native code, for example in an iOS app (theoretically it's possible with ECL, Clozure CL, or a "Scheme to C" compiler).


Common Lisp. Everything else is as much Lisp as PHP is Perl or C++ is C.


There are only a few reasons to use Lisp: it's educational from a math/comp-sci perspective, you enjoy programming in it, or you can make money doing so. Career-wise, it's way down on the list of languages/environments in which I would choose to do my daily coding. (And I don't currently have my brother-in-law calling me on a weekly basis to assist in automating various tasks in AutoLisp...)


Actually, there are more reasons to use Lisp; the biggest is probably that many programs become much simpler in it.

But such an argument is meaningless on its own. The only meaningful argument would be to compare the pros and cons of using Lisp versus some other language for a specific field with some specific constraints.

For instance, I won't be trying to use Lisp over JavaScript for writing Firefox plugins, but I probably would use it over Objective-C for this task ;)


If your only source of Lisp exposure is AutoLisp, that would explain the dislike.

AutoLisp is very primitive even when compared with minimalist Scheme interpreters.


I'm a big Lisp dialects proponent but I don't like that article very much.

It's very naive, and the author seems inexperienced, IMHO.

For example the author writes: "I can place the cursor on the name of a function and ask “where is this defined”, and it jumps to the place of the definition, which could be in my own code or in third party code. But I can also ask “where is this function called” or “where is this variable referenced”. This is extremely useful for code refactoring and it beats every “modern” IDE I've seen."

Ouch.

The author doesn't seem to be very knowledgeable of "best practices" and IDEs from the last ten years available in the Java/C# world.

Lisp dialects can really shine in various ways but "refactoring" ain't exactly one area where Emacs (which he mentions and which I do love) beats "modern IDEs".

Apparently the author's only experience comes from old JavaScript development environments and some Perl hacking. He hardly seems "connected" with the real-world, modern way of developing software.

His description as to how he's upgrading servers by manually copying binaries also shows he's disconnected from actual enterprise deployment techniques.

Overall it feels very amateurish.

But kudos for picking a Lisp dialect and sticking to it and reaping the benefits of that smart choice...


Yeah, the author obviously has never used Visual Studio (Or Eclipse or SlickEdit or...) in the last decade or so.

However, the "real world deployment" thing is kind of a spectrum. There's definitely the super-tight provisioning/deployments with ec2/puppet/jenkins/etc, but it goes all the way down to "have debian and hand-install widgets". That seems to depend on the organization's willingness to buy into the "cloud" idea, which seems to be regulated by the leadership's comfort with software.


Many people don't use IDEs anymore; I guess the only field where IDEs are still dominant is the corporate environment. (And some LISP shops, obviously ;)) So even if he hasn't used Visual C#, NetBeans, or Eclipse yet, this doesn't necessarily mean that he is inexperienced. There is still software development going on beyond the enterprise. ;)


I don't know, plenty of game developers, android developers and iOS developers are IDE users.


So far as I know, Visual Studio won't show you the source code to length or read, whereas Lisp will.


You sure? I don't know if VS will by default, but (at least on my machine) it's installed under ${VS11}/VC/crt/src so I'm sure it's possible - not sure if it's the complete sources, though.

Eclipse or IntelliJ will for Java if you install the JDK, though, and I'm pretty sure you can install the libc++ sources for Xcode and go through them if you want (don't quote me on that though).


You do have a point.

I was thinking more along the lines of drilling down into the .net framework.


Oh, yeah, it won't do that. I keep dotPeek around for that.


Except many popular, modern lisps still don't show you that. Native functions often can't be shown. Bummer, huh?


I am speaking about SBCL, in particular; don't know about others. You can drill all the way down.


I think some primitives are still unexposed. Cons, if, etc?


Well, so far I can drill into if, cons, unless, let*, let.


You can drill into the assembly code generated.


I dislike most of these "modern IDEs". They seem to encourage Big Programs that shouldn't be written in the first place. http://michaelochurch.wordpress.com/2013/01/09/ide-culture-v... (IDE culture vs. Unix philosophy).

However, you are right that, in terms of features, they deliver, if you keep your directory structure, language choice, and development practices within certain bounds.


Yes... I mean even if the programs are big, IDEs encourage a massive footprint of just about anything. (I would not go so far as to declare big programs per se a bad thing.) Massive build instructions, massive interface declarations, massive amounts of stub files... Things happen with IDEs that would never happen when using a plain editor.

Probably Java owes part of its verbose reputation to the abuse of IDEs. It's just so easy to use 60-letter function names; if people used a regular editor, they would not do this, because their hands would start to hurt.


>Massive build instructions […] would never happen when using a plain editor.

How many non trivial projects do you compile by hand rather than using an automated build system (make etc)? How does autoconf or cmake prevent or discourage you from using 'massive build instructions'? Why is turning on more warnings or other compiler options a bad thing?

>Probably Java has a part of its verbose reputation due to the abuse of IDEs. It's just so easy to use 60 letter function names, if you used a regular editor, people would not do this because their hands would start to hurt.

You are talking about code-completion. What 'regular editor' do you use that doesn't have at least basic support for code-completion? pico/nano? Notepad?

When these arguments/rants come up, it seems that the definition of 'IDE' changes to fit the argument being made. If somebody suggests they prefer to use an IDE because it has code-completion, they are quickly told that plain editor X also has code-completion. Then it is argued that IDEs are bad because they have code-completion which encourages you to write verbose code. This seems to extend to many other 'IDE' features: I've seen people argue against inline documentation lookup, against syntax highlighting, against a 'build' or compile command in the editor, against 'jump to definition', against interactive debuggers. I've also seen people argue that all those things are available in 'plain editors' and thus there is no need to use an IDE to get them.

At this point I've no idea at all what the difference is between an IDE and 'plain editor'. Other than truly basic editors that hardly anybody uses to code, I think just about everything could be called an IDE.


Agreed, my workflow is basically the same whether I use an editor or IDE.

The only difference being that sometimes I wish the editor had more features and sometimes I wish the IDE had less stuff on the screen so I could just see the code.

It is however a lot easier to configure an IDE into a dumb editor than it is the other way around.


After a few decades of Emacs and Common Lisp, I strongly prefer the free Community Edition of IntelliJ with the Clojure plugin.

Sure, I have Emacs set up for Clojure development; I just don't use it anymore, preferring IntelliJ. Times change :-)


Why?


I find that I can work a little bit faster and more efficiently.

I admit that it is a close call.


The fundamental question is how we compose things. The Unix way is to compose small programs via byte streams. Another way is to link lots of libraries via function calls. Yet another way is the image approach taken by Smalltalk and Common Lisp. These approaches all have tradeoffs and their own version of the dependency hell.

Thanks to the batteries-included philosophy, a few lines of Python can yield power that would have been a Big Program in the early days of Unix. I assume nobody would suggest that Python's glob library should exec the find executable or something.


@qznc that's an intriguing formulation. Could you elaborate a bit?

1) What do you mean by "image approach"? 2) What example are you thinking of for the "link lots of libraries via function calls" approach?

The trichotomy I am familiar with is to divide things up into lisp vs c/unix vs smalltalk schools, but that doesn't seem to map to what you are saying.


As spc476 answered 1), I'll just expand on 2).

Emacs would be an example of the link approach. It includes an editor and an email program, which are composed into a single program/image. In contrast, the Unix functionality (vim, sendmail) is split into separate programs.

What is the difference between a function call and a program invocation? In both cases we have a name and maybe some arguments. In both cases the computer executes some code bound to that name, parameterized with our arguments. The difference is primarily how expensive the name resolution is. A shell command requires inspecting $PATH, looking through directories, and fork/exec-ing another program, which is a lot of syscalls. A function call in the best case has been compiled to a "call" instruction, whose cost depends on the branch prediction. Of course, there are various levels in between (dynamically linked libraries, plugins, shell builtins, etc). There are more complex composition methods like SaaS or cloud computing. There are more efficient methods (inlining optimization).

Semantically, it is interesting when a name is resolved. Compiled languages (C,Go) usually do this statically at compile time, where C is even more restrictive as it only looks above in the code. Dynamic languages (Python,Ruby) do it at runtime. In general, early binding is usually good for performance, while late binding is better for flexibility.
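The two cheapest composition styles above can be illustrated side by side in a small Python sketch (assuming a POSIX environment with `tr` on $PATH):

```python
import subprocess

# Composition via function call: the name "upper" resolves in-process,
# in the best case to a single call instruction.
def upper(text):
    return text.upper()

print(upper("hello"))    # HELLO

# Composition via program invocation: $PATH lookup, fork/exec, and pipe
# setup - many syscalls for the same result.
result = subprocess.run(["tr", "a-z", "A-Z"],
                        input="hello", capture_output=True, text=True)
print(result.stdout)     # HELLO
```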


For "image", read: a runnable core file that can be updated (from within the program) and saved. Seriously. Forth and Smalltalk are two languages that are typically "image" based.

Because of that, the concept of a "program" might not make sense, because it's more of an operating environment. And even if you do buy the concept of a "program", each program is, in essence, a potential development environment (if the REPL is exposed - it can sometimes be hidden or even modified). As such, a "program upgrade" is (in my opinion) insanely difficult to manage in such an environment, as the customer may have extended the program in an incompatible way with respect to the upgrade.

Also, the concept of "source code" might not exist as we understand it (a human readable set of instructions the computer will carry out stored in a file or files). Forth will compile the code directly; Smalltalk (disclaimer: I have no experience with Smalltalk) might store the code you type somewhere, so it can be viewed and edited, but there's no guarantee it's a "file".



