Hacker News
How Lisp is Going to Save the World (landoflisp.com)
382 points by smartial_arts on Jan 9, 2013 | 229 comments



Great story. Unfortunately, the API documentation and tutorial writing guilds went extinct decades ago and have never been able to return to Lispland. :)


This is how misconceptions are spread.

I'd ask you to mention a lisp library which you use but find under-documented (as another reply already has), but I don't think anyone with actual first-hand knowledge of the CL ecosystem would make this comment (even though I'm sure it received many enthusiastic if uninformed upvotes).

Let's look at some popular libraries with documentation:

- Alexandria http://common-lisp.net/project/alexandria/draft/alexandria.h...
- CL-PPCRE http://weitz.de/cl-ppcre/
- metabang-bind http://common-lisp.net/project/metabang-bind/
- not to mention the standard by which all other language documentation should be judged, the CL HyperSpec http://www.lispworks.com/documentation/HyperSpec/Front/

And with sites like http://www.cliki.net/, http://common-lisp.net/ and http://www.gigamonkeys.com/book/ and many other full length books, if you can't find lisp documentation you're not looking very hard.


Lisp is very under-documented compared to other languages. This makes learning LISP extremely difficult.

Let's take a look at the HyperSpec. I follow your link and I'm greeted with a mostly blank (and very ugly) page. Ok, there is something called "starting points", I guess I'll start there. That page just lists all the indexes. Ok, let's try "chapters", that sounds like a place to start. Oh yay, another page with nothing but links. Ok, let's try "introduction". Again, just a mostly empty page with links. Most people would have given up by now, but let's keep going. Let's try 1.1, which takes us to ANOTHER page of links. I feel like I'm getting the run-around here. Let's click 1.1.1 and see what happens. Finally I get a page that isn't all links, and it's a single paragraph. What a waste of time.

Let's try another language, how about JavaScript? Google gives me this link: https://developer.mozilla.org/en-US/docs/JavaScript Some helpful information on the first page! Do I want a reference? A guide? A re-introduction? Maybe some sites with courses on JavaScript! Wow, that's really useful. Let's just start with the guide because it's "Our primary guide about how to program with JavaScript" and that sounds about right. It takes me to a list of links. Yuck, let's try the first one, called "JavaScript Guide" (isn't that what I clicked on to get here?). Yup, that link just refreshes the page. That sucks, let's try the second link, "About this Guide". A page that describes what I should already know, how to find my way around the guide, and tips for diving into JavaScript.

This is the problem with LISP documentation. At best, if I know exactly what I am looking for I find a giant index. At worst I have no idea what I'm looking for and I get snooty comments from LISPers.

The point is you shouldn't make people look really hard to find information. Good documentation should be easy to find and navigate.


The HyperSpec is a standards document, not an introduction. It's an invaluable reference you use after you get familiar with the language. I also have emacs set up so that I can look up symbols from within my source and have the relevant page open up in a w3m window in emacs. I use it constantly and miss the same level of quality and clarity in other language documentation, but you need to get used to its style and format. If you want to get started with lisp, you go to http://www.gigamonkeys.com/book/ and http://www.cliki.net/

Learning lisp wasn't any more difficult for me than learning any other language. "Practical Common Lisp" is well written. "Object-Oriented Programming in Common Lisp" is a well-written book as well, and "Land of Lisp" was fun, and not too bad either.

The core language is extremely well documented and understood. As for the libraries, some of them are well documented, some of them aren't. After the docs for RESTAS started going slightly out of date, I just opened the source and read it all in an afternoon (~600 LOC), and mostly understood it. I prefer not to have to do that, but claiming it makes lisp extremely difficult to learn? To me it's just an inconvenience, and I have a much better understanding of the framework now, which I couldn't have gotten from just reading docs. And some libraries are so small I can figure them out entirely by inspecting the package in slime and reading the docstrings. Usually there is at least example code.

The situation isn't perfect, but it is more than possible to use lisp libraries with great success. Most authors are also pretty approachable on irc or the mailing lists. In other words: I'm a young amateur programmer who picked up Common Lisp less than two years ago, I'm barely competent, and I could figure all this out. Why expert programmers with years of hacking behind them can't do the same, I can't understand.

PS: as for the ugliness of the HyperSpec, I'm happy if it scares away people who are impressed by "modern" design more than by substance and content. I usually read it in text-only browsers, and curse docs I can't read well from within emacs. Alt-tabbing to a browser and having to use the mouse while hacking ruins my day :)


"I don't think anyone with actual first-hand knowledge of the CL ecosystem would make this comment"

I'll make that statement, but only in a somewhat pedantic sense about the language standard itself. Yes, I know CLTL2 is over a thousand pages long, but the one thing about programming languages that always bothers me is undefined behavior. Sadly, even Common Lisp has undefined behavior; Paul Graham's book On Lisp actually relies on undefined behavior being implemented in a particular way e.g. using setq on an unbound variable.

Good documentation should not leave anything undefined; good documentation should leave no questions about the semantics of a system/API/language. Given source code and documentation, it should be possible (perhaps difficult, but possible) for a person to figure out what will happen if the program is run.


The undefined behavior in the Common Lisp specification was a design choice. It has nothing to do with 'good' or 'bad' documentation. Common Lisp was designed from 1981 on. The Lisp community (here the successors of Maclisp) went on to explore a lot of new ground: implementations on minicomputers, mainframes, supercomputers, parallel machines, microprocessors, stack architectures, on top of other programming languages like C, etc. There were small and stupid compilers, optimizing compilers, whole-program compilers, ... It was a time of experimentation. At that time a lot of behavior was left undefined to allow compiler writers to explore different types of implementations. With the knowledge of a decade later some of that could have been defined more precisely, but then the standardization process ran out of steam due to lack of funding and interest.

Using SETQ on an undefined variable may be undefined in the Common Lisp standard, but every implementation deals with it and allows it. That's a non-issue. There are a lot of other things which aren't defined in the standard and which all implementations deal with. Garbage collection, for example: also not in the standard, but every user expects garbage to be collected.

It would be nice to have less undefined behavior, but it has nothing to do with good or bad. It was useful in a time of experimenting.

Even worse for Common Lisp was that in 1981 there wasn't yet an OOP system for Lisp that had been explored and accepted. This way CLtL1 was defined without an object system, and one had to be added later (CLOS).


Seriously, this is a great point. One of the big problems with a small community of programmers who all think of themselves as elite is that tasks like documentation and tutorial writing go by the wayside. This comment should be a wake-up call to any smart and far-sighted individuals who want to promote a language.


Is there an open source Common Lisp library that you'd like to see better documented?

Also, if you're (or anyone else is) interested in working on a Common Lisp environment tutorial, shoot me an email. I've been working on one but haven't launched it yet.


> Is there an open source Common Lisp library that you'd like to see better documented?

I was talking as a long time member of a different programming community with similar attributes. (Smalltalk - since 1998)


I would love to see cffi, opengl and one of the gtk binding libs have better documentation.


...And cffi is one of the better documented libraries, I have used it quite a bit and have had very little trouble working it out.


The opengl bindings don't really need documentation, as they're automatically generated to mirror the C API. Really, if you want to know how to use `cl-opengl`, you memorize a simple naming convention and the rest is a matter of "how do I use OpenGL," not "how do I use cl-opengl?"


That's great, but I haven't learned C. Besides that, leveraging documentation from another language is a hack. As it stands, there isn't documentation around for learning to create 3d graphics without needing to know other languages first.


Are you trolling? First off, learning C well enough to understand the OpenGL documentation will take you much less time than understanding OpenGL itself (since you'll only need to read C, not write it). You had a problem, and people gave you a solution; separate CL docs for OpenGL would have THE SAME contents, just with lisp function names instead of C ones (and as was explained, it's a simple transformation to do in your head). What the hell do you want?

PS: I forgot why I stopped commenting on HN threads about lisp. If you read the old usenet, this kind of discussion has been plaguing lisp since day one. People complain, and when given a solution, they just start bitching about something else, avoiding the solution. There are no docs; now the docs are in C; what else? As I've become part of the lisp community, I've learned the proper way to fix this: if I encounter a deficiency in the lisp ecosystem, I deal with it if I can, and if I can't, I politely ask the author/maintainer if they have time to work on the issue. If they don't have the time, I simply accept reality and move on with my problems. Bitching on HN solves nothing. Honestly, why lisp? Why do people have this supreme need to express their dissatisfaction with a language, and with libraries developed by humans in their spare time, away from their lives, for free? Be fucking happy, and help out if you care about lisp; otherwise, just shut up!


Why are you angry? It looks like you've decided that if people complain about something, those people must be broken and not the system.

And why do people complain about Lisp? It's because the syntax is amazing. It's an incredible language to write in. It is more expressive than anything anyone has ever used, but then they can't do anything with that, because it takes so much more effort with lisp to do things that are taken for granted in other languages. GUI libraries and frameworks are so easy with C-based languages, because they contain powerful tools that I don't have to write myself.


  GUI libraries and frameworks are so easy with C based languages, because they contain powerful tools that I don't have to write myself.
But you DO have to write them yourself. That's just how open source works. Open source isn't about getting free stuff. People had to write those tools, and they did, because they cared. Lisp isn't so great because McCarthy made it great; it's great because for 50 years people worked fucking hard to make it great. Standardizing Common Lisp took 10 years and a shit ton of money. Every library took months or years out of someone's life. You get all that for FREE. If you really care about lisp, you will actually work to make it better. You will be one of those who spend their time and money on it.

Being one of the countless people complaining on forums impresses no one; it only makes your ego feel better, and doesn't actually help at all. In fact, it's hurtful to lisp, because curious people might read your comments and conclude that lisp isn't worth it for them, and for some of them it might be; it certainly was for me. So that is why I'm angry. I'm actually trying to help you and lisp, not trying to insult people. If you care about lisp, you'll make it better; if you don't, just do something else and be happy with it, if lisp can't make you happy. Complaining is a non-solution, and I dislike non-solutions.


It's NOT a hack because the bindings are calling directly into C. CFFI is a way for lisp to directly run C functions, and a CFFI wrapper does just that: for every function that exists in the OpenGL API, the same function (with the same arguments) exists in the CFFI OpenGL bindings.

Duplicating documentation needlessly is a hack. If your library that you program uses another library, are you required to document that other library as well as your own? No, that library already has documentation.

If you want to learn OpenGL, learn a tiny bit of C. You know what? Learn a tiny bit of C either way. It's the foundation at the core of every program written, no matter what language. Plus, OpenGL is a lot more complicated than C, so you've got your work cut out for you if you can't handle C.


Thank you for changing my perspective. I've used a fair few high-level programming languages, but it's time to get closer to the metal.


And where do I find this "simple naming convention"? I tried google and found repos for the project, but nothing that told me how to read the C documentation and translate it into LISP.


I've never used cl-opengl, but the usual naming convention for lisp names is as follows:

  name-with-multiple-words

  *global-variable*
so GLOBAL_VARIABLE, NameWithMultipleWords, and name_with_multiple_words are easy to figure out. Anybody who has used lisp for more than a few days knows how to name things, and since the number of naming schemes in C is finite, figuring this out is close to common sense. So glLoadIdentity() in lisp is (gl:load-identity).
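The rule of thumb above is mechanical enough to sketch as code. This is only an illustration of the convention (written in Python for clarity; the helper function is hypothetical, and cl-opengl's actual exported symbols may differ in detail):

```python
import re

def c_to_lisp_name(c_name):
    """Translate a C-style OpenGL function name into the usual
    Lisp-style symbol, e.g. glLoadIdentity -> gl:load-identity."""
    # Drop the "gl" prefix; the Lisp side uses a package prefix instead.
    body = c_name[2:] if c_name.startswith("gl") else c_name
    # Split CamelCase into words, lowercase them, and join with hyphens.
    words = re.findall(r"[A-Z][a-z0-9]*", body)
    return "gl:" + "-".join(w.lower() for w in words)

print(c_to_lisp_name("glLoadIdentity"))  # gl:load-identity
print(c_to_lisp_name("glBindTexture"))   # gl:bind-texture
```

The point being: once you've internalized this one transformation, any page of C OpenGL documentation reads as Lisp documentation.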

Happy Hacking!


I don't know CFFI well enough to take a stab at it. The github manual has a lot of words in it, afaict. :-)

However, I've been longing for getting one of the GTK bindings working for me; when I do that I'll contribute back some docs.


Yep. They have time to draw elaborate cartoons, but not elaborate documentation! Funny how that works.


The author (Conrad Barski) wrote a book: Land of Lisp ...


They're alive and well for Racket and Clojure.

Here's the Racket guide, which provides detailed documentation for every facet of the language: http://docs.racket-lang.org/


This is why after trying to learn LISP I ended up with Racket.


The comic references clojure in multiple sections (multicore, lazy); I think you'll find the tutorial and documentation guilds still alive and well there.


Actually my snide remark came from trying to find docs for Clojure libraries. :) Clojure Docs (http://clojuredocs.org/) is alright for the standard library; the API documentation (http://clojure.github.com/clojure/) not so nice. But finding complete and updated docs for Compojure or any other of Clojure's web frameworks is impossible. Ironically, Noir (http://www.webnoir.org/) seems to be the best documented of the bunch, but it has been deprecated.


I love lisp. I use/used it to build my web service product[1] and anything else I can.

=how do I do X?=

java: something similar to X is already done in Y. add abstract class and redo X and write Y. +200 LOC

lisp: something similar to X is already done in Y. realize you can generalize X and Y into a new pattern and use it for ABC too. -70 LOC

=there is a bug in function X=

java: open X.java. edit line. restart program. X seems to be working correctly.

lisp: two hours later, ah ha! I understand this code. fix X. write unit test. eval unit test. X works correctly.

[1] demo: https://a.keeptherecords.com/demo, source: https://github.com/ThomasHintz/keep-the-records


Look, I like lisp as much as the next guy, but you can "spend two hours" reading java code, write a unit test, eval said test, and then woohoo! You can do that in basically any modern language.

And your first example is something that most modern languages are sold on. The truthiness of those statements can be debated, but your examples are flawed to say the least.

Point is: you can generalize and write unit tests in any language.


That is true to a degree, but you really cannot generalize very well at all in java. You will also be hard-pressed to find java code written functionally enough that you can just run a unit test and know it is working correctly. Usually you have to instantiate many other classes and stub in test data via something like guice. There are many places for bugs to hide in that type of setup.


I'm foraying into lisp land for the first time with SICP, and the one thing that bothers me the most is that Scheme is so hard to debug. As a Python programmer who is used to print debugging, this becomes very frustrating very early on.


What's preventing you from print debugging in Scheme? For convenience, I almost always define something like this:

    (define (println . args)
      (for-each (lambda (arg)
                  (display arg)
                  (display " "))
                args)
      (newline))


With the deepest reverence to John McCarthy, I regret to say that Lisp is our cosmological constant. It creates a static universe that is otherwise expanding; it reduces the problem to a solvable one and then declares victory. The truth is, it's all state. All of it.

Remember that movie "The Boy in the Plastic Bubble", about a boy with an immune system deficiency who lives inside a hermetically sealed, sterile bubble? That's not the answer. We instead need to create immune systems (read: robust systems) rather than simply avoiding exposure.

[I know I'm simplifying things, and this is not meant to be a slight.]


I attended TechMesh 2012 in London. The creators of Haskell were there, and there were a number of other Haskell talks. They were almost all about how to model state, I/O, etc. One of the Haskell creators also created QuickCheck, a generative testing tool where tests are written in Haskell; he talked about how he found state-related bugs in Riak. Rich Hickey, the author of Clojure, was there, and he keynoted about state. I'd argue that the whole point of Clojure is to manage state: specifically, to have a model for how to work with the passing of time.

In other words: functional programming and Lisp do not translate to ignoring state.


[aside] that looks like it was a very interesting conference. i can't find much related info. will it be annual? are they spreading to other places? who was behind it? [edit:] it seems to be an indie thing http://techmeshconf.com/techmesh-london-2012/contact/


They're redoing it this year, and considering doing one in New York this year as well. I think the organizers are a loose collection of Erlang companies (Trifork, Erlang Solutions, probably more). IIRC they organized an Erlang conference and decided to relaunch it as a general FP conference.

It was _really_ good, you should definitely check it out :)


How to model state is a meaningless question without asking: state of what?

The same is true for I/O. You must start from a protocol.

Most people probably underestimate how many brilliant people contributed to what we could call a modern Lisp.

With all respect, Clojure is nowhere near in terms of sanity, consistency and uniformity.

I don't even want to argue about Haskell. It seems the word "monad" is like the words "chakra" or "dharma" in popular culture.))


> It seems the word "monad" is like the words "chakra" or "dharma" in popular culture.

Are there any more interesting concepts you stubbornly dismiss?


> With all respect, Closure is nowhere near in terms of sanity, consistency and uniformity.

It's Clojure http://clojure.org/

Closure is Google's js library and tools https://developers.google.com/closure/


Yeah, thank you. It was a typo.


If you are talking about syntax uniformity, lisp is not very regular either: http://xahlee.info/UnixResource_dir/writ/lisp_problems.html

Although I agree that clojure has its warts. For example, clojurescript uses the numerical stack of JavaScript, so you get different and unexpected results than when running on the JVM.


lisp is much more regular in syntax than most popular languages, and Xah's page that you point to amounts to little more than saying "I don't know lisp well, so I find various aspects of the language - backquote, quote, unquote-splicing, etc. - confusing."


If you have issues related to the syntax uniformity of certain Lisp dialects, please speak for yourself rather than linking to that troll Xah Lee.

Xah Lee recommends that developers subjugate themselves to Wolfram Research which controls the proprietary program Mathematica.

Even though some Lisp dialects have a couple of quoting operators like ' and `, at least they don't have infix operators like + and * the way Mathematica does. Besides, a few quote operators are no basis to say that Lisp syntax is "not very regular."


One of Clojure's goals is to interoperate well with its host platform, which it does in both cases. I'm not sure I'd count that trade-off as a wart.


I'm sorry but what you're saying is immediately obvious. No one in the functional programming world is laboring under the impression that state doesn't exist.

The real issue is whether you think it's worth separating out code that can be reasoned about in a mathematical fashion from stateful code that has to deal with mutation. Functional programming is where research on improving abstractions to better allow this separation of concerns happens, which is pretty much the opposite of a clumsy hermetically-sealed bubble rolling around and knocking lamps over, etc.


I'm not sure what I said is obvious because you seem to have missed my point. I didn't say that "state exists". I said, "It's all state." I'm saying the notion of "separating out code that can be reasoned about in a mathematical fashion," while a successful strategy in the past, is inherently crippled by that division.


How exactly is it "inherently crippled"? There are tons and tons of code that I write every day that I'm able to separate out like this. Honestly, there is way more code that you can reason about in this way than not. This isn't some academic exercise; people write functional code that operates in this way and do a lot of useful things every day.


For the record, I also did not think whatever you were saying was obvious, but if your point is that understanding programs by "separating out code that can be reasoned about in a mathematical fashion" is no longer a viable strategy, I must take issue with your statement. Can you provide an example of a side effect free function that can't be reasoned about mathematically? I'm sure they exist, but they seem to me to be a kind of exotic beast. Or maybe I am just not familiar with your domain of expertise?


What makes you think that stateful code can't be reasoned about in a mathematical fashion?


Well, there's certainly a popular conception among program analysis and formal methods researchers that side effects make analysis much more difficult.


Lisp is a multi-paradigm language that's perfectly capable of representing state. Perhaps you're confusing it with some purely functional language like Haskell.


... of course Haskell is perfectly capable of representing state too ...


I'm not saying it can't represent state; I'm saying that it promotes avoidance of state as a virtue.


One of the reasons avoiding state is promoted is to make it easier to write robust software. State is too often the enemy of robustness.


And robustness is often the static plastic bubble that hides in itself, never accomplishing much in the real world.

State is too often the real world.

Mind you, I like robustness. It's like the cinder blocks I use to build my house, each robust and contained in itself. I just can't disregard state when stacking my cinder blocks on top of each other; the flexing and distribution of load is an emergent state that I have to consider when building each cinder block.


I translate your first paragraph to "I have never used Haskell" in my head.

The idea that Haskell is somehow about not doing anything because it can't do state is just silly. It doesn't pass the smell test. People do not write web frameworks or compilers or anything in a language that "can't do anything" because it can't do state.

Haskell is not about "not touching state". That's just objectively wrong. It may not work exactly like you are used to, you may not like the tradeoffs, but Haskell can do things. I know, because I can make it do things.


So, to be clear here, I assume you are talking about mutable state. Even if that is the case, it is not about avoiding state at all. It is about managing it explicitly.


I would say that it is.

Mutable state in software is like moving parts in hardware. It's often necessary, but it makes everything a bit more fragile.


Avoidance of state in general is a pretty ridiculous claim to make. Even when writing pure functions, the state consists of all of the arguments to all of the functions up the stack.


So because there is state, you want to have a messy pile of it? Many lisps have more advanced systems for state than most other languages.

Common Lisp has a better object system than any other in the world. Clojure has a time and state management system.


I rate Moose (http://moose.perl.org) up there with CLOS when it comes to power, extensibility and features.

However, saying these are the best OOP systems in the world is still very subjective, because I do have a fondness for the simplicity and elegance of prototype-based OO, especially as implemented in languages like Io, Rebol & Self.


You're making an extremely vague but bombastic claim here. I don't think you're intentionally trolling, but it operates on a very similar level.

What does this mean in practical terms? What are you specifically criticizing? It's obvious that you're taking some kind of dig at functional programming, but what is the actual critique or logical argument you're attempting to make here? It is very unclear.


Funny you should say that: just yesterday Reddit informed me that Lisp is 100% imperative because it has a loop keyword and allows destructive assignment.


Well, "Lisp" is a language family rather than a specific language. Although it's culturally drawn toward functional programming, Common Lisp is more or less an imperative language with first-class functions, much like Python is. Its designers chose this approach because they wanted it to be a "big tent" language that imposes few opinions on the programmer. Other Lisps like Clojure and Racket went in a different direction.


"The truth is, it's all state. All of it."

The problem is not state. The problem is being able to recreate the state: both for testing and for business purposes.

The bigger problem is that most programmers, like you, knee-jerk before the issue of "recreating the state" and declare that "It cannot be done, because it's all state".

And hence we have both languages which are built by considering mutability to be a virtue, and back-end databases (your typical CRUD SQL DB) built upon the same false premises.

I'll give you one example: Monday morning, the service desk calls because one user of your app, at 3:07pm on Friday, experienced a bug.

Can you "recreate the state" your application was in at that time as to be able to figure out what triggered the bug?

You probably can't, because your DB has changed meanwhile: you're SOL because the 'U' and the 'D' in CRUD are destructive. It's mutability. It's the enemy of determinism.

So now you're stuck calling your DB admin, asking for a dump of the prod DB from last Thursday evening and a dump of the log of the transactions that happened on Friday... And you're spending hours and hours trying to recreate the state the environment was in when the sht hit the fan. And you may or may not be able to do it.

It's just one hypothetical scenario, but things like that are the daily lot of many programmers.

But software development, in many cases, shouldn't be that painful. If you were to use a CRA DB (Create Read Append) and languages favoring immutability and a more functional approach overall, you'd have a much much easier time recreating the state.

If you think about it, it's all a gigantic deterministic machine.

So why can't we accept that the notion of time is an important one?

Why can't you realize that the battle the likes of Rich Hickey are fighting are worth it?

It is possible to use programming languages and DBs (or wrappers like Datomic in front of SQL DBs) that do definitely make it easier to reason about programs and that make it just so much easier to recreate the state and to query the past (which has a lot of business value).

Why do you react like this: "The truth is, it's all state". Saying that as if nothing could be done and as if every single programmer's life should be Java/C# + ORM + XML + SQL hell?

There are people trying to make our lives as devs easier. Why not try to listen to what they're saying?

Is Rich Hickey "seeing things" with Clojure + Datomic?

To me the combination of a functional language (or at least a language that can be used in a mostly functional way) and a CRA DB (Create Read Append) which incorporates the notion of time from the start is a godsend to our industry.

Why do you close your eyes?


A CRA DB is essentially a SQL database with temporal capability.

Even if you have perfect time-warping capabilities, there are still big challenges to solving bugs with that kind of analysis. The time the bug is noticed may be long after the root cause, and working backward could be a very slow process.

That's why SQL has declarative constraints, to try to make the error noticed at a time closer to the root cause.

Would I like full time-warping capabilities sometimes? Of course. But it's just one more option, and a very expensive one at that. To be any use at all, we'd need to be able to warp backwards through the entire OS and its scheduling decisions (not just a single process), because many difficult bugs involve race conditions.

Really, we need better temporal capabilities, and more declarative constraints (that are immune from race conditions, like a UNIQUE constraint), better ways of avoiding race conditions, etc.; and they all need to work together.

That's why I have worked on temporal capabilities[1] as well as declarative constraints [2] (immune from races) in Postgres. Also, you might be interested in truly serializable transactions in postgres, which eliminate race conditions between transactions without blocking[3].

I feel like the programming language community should work more with the database community. From the article, they mention how restarts can help recover from errors encountered after part of the state has already been changed. But that kind of recovery is taken for granted with the atomic nature of transactions in a database. Database theory is largely about detecting, containing, mitigating, recovering from, and preventing errors[4].

[1] http://www.postgresql.org/docs/9.2/static/rangetypes.html [2] http://www.postgresql.org/docs/current/static/sql-createtabl... [3] http://drkp.net/drkp/papers/ssi-vldb12.pdf [4] http://thoughts.davisjeff.com/2009/12/23/good-error-recovery...


Try googling around for "cqrs". There are many ideas on how to do exactly this effectively.

All it takes is swapping the "write" operation for an "event" entity (everything is state). Combine that with intention-revealing commands (customer_died instead of delete from customers where id=1) and you hit the jackpot.

The end result is a nice stream of events that is the ultimate source of truth and enables crazy queries you never thought you would need: http://abdullin.com/journal/2010/6/3/time-machines-should-su...
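A toy sketch of that idea (hypothetical events, in Python): the log of intention-revealing events is the source of truth, and any read model, past or present, is just a fold over it.

```python
# Toy event-sourcing sketch: the event log is the source of truth.
events = [
    {"type": "customer_registered", "id": 1, "name": "Ada"},
    {"type": "customer_registered", "id": 2, "name": "Bob"},
    {"type": "customer_died", "id": 1},  # intention-revealing, not DELETE
]

def replay(log):
    """Fold the event stream into the current read model."""
    customers = {}
    for e in log:
        if e["type"] == "customer_registered":
            customers[e["id"]] = {"name": e["name"], "alive": True}
        elif e["type"] == "customer_died":
            customers[e["id"]]["alive"] = False  # nothing is ever lost
    return customers

print(replay(events))      # current model: Ada retained but marked dead
print(replay(events[:2]))  # "time machine": the model as of event 2
```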


This is quite interesting. In practice, though, the lag between the command and query models when the datastores are different (almost always, for large data) becomes a real concern.


Pardon my pedantry, but CQRS doesn't necessitate eventual consistency. Denormalization of domain events into read models can happen synchronously. It all depends on the needs of the project. If the project's information architecture and processes can support a degree of latency (which is implicit in almost any technology serving static screens anyway), then the performance gains of eventual consistency can be realized.

A good, nuanced explanation of CQRS is here: http://codeofrob.com/entries/cqrs-is-too-complicated.html


To me, state is what makes Lisp hard: every time I look at Lisp code (Scheme via Guile) I need to interpret it in my head to figure out the state. And certainly set! doesn't help.

Lisp puts a heavy burden on the individual programmer, and its benefits can be achieved with better planning and engineering using the standard C++ toolkit. Yet it is not easy to achieve the advantages of C++, such as performance, parallel execution, and low-level control, with Lisp.

Lisp is nice for the few who have small projects, big brains, and little time, but by tightly coupling the problem to the code (with no performance benefit) it creates unstructured nonsense that nobody but the author can understand.


Could you provide examples of CRA databases?

I'm not familiar with the concept, and Google and Wikipedia haven't helped.

Thanks.


Why can't you stop making rhetorical arguments? Please, just say why you think something is good without attacking people who have a different point of view. Reading the sort of polemic above is exhausting and confusing.


> The truth is, it's all state. All of it.

Lisp represents mutable state the same way Python and C++ do.


Wow, that went from a supposed linkbait title to a surprisingly amusing and interesting explanation of Lisp


Thank you. Admittedly, I intentionally worded it to look like one :)


Was that with the intent of baiting clicks, or as satire?


Neither - I cannot measure how many times it gets clicked, since it's not my website, and I didn't really mean it in the satirical sense.

I guess I just wanted to see how many points it collects. So far it has exceeded my wildest expectations, although that is mainly due to the content, not the title.


In the Explanation on Brevity Guild Micro Fighter, you use "i" as the variable, but the code snippet uses "n". I only found this because I was having fun following along! :)


That's not my comic, unfortunately :)

You'd better let the website owners know.


> although that is mainly due to the content, not the title

I'd beg to differ.


I had Haskell as the subject of my intro course at university (my first experience with functional programming). I've tinkered a bit with it since then as well, and I'm at the "somewhat intuitive grasp of monad transformers" stage.

I tried Clojure some weeks ago through the Clojure koans that were posted here. Compared to Haskell I found the syntax very obtuse, and it was not obvious why Lisp would be more powerful than Haskell. The bare-bones syntax felt more like a "proof of concept" than an actual strength.

(of course, the koans took less than a day to do so I'm not dismissing Clojure because they didn't impress me, but I got the idea that the koans were an attempt to showcase Clojure's strengths)


> I found the syntax very obtuse and it was not obvious why Lisp would be more powerful than Haskell.

The easiest answer to that is, there's no syntax, which makes reimplementing your own Lisp parser trivial (one single function call, in fact). This in turn makes reimplementing your own Lisp, inline in Lisp, similarly trivial.

That sounds like something out of an SICP exercise - and it is - but the same properties make it easier to reason about your code in a way that's far more abstract than other languages let you. Once you start treating your code as data that other parts of your code can manipulate, you can start refactoring in ways that simply aren't possible in other languages. This explanation by a well-known Perl programmer explains it very well: http://lists.warhead.org.uk/pipermail/iwe/2005-July/000130.h...

Another way I've heard it (this may have been pg, but I can't remember), is that, when writing idiomatic Lisp, you first think, "What language would make solving this problem really easy?" and then proceed to implement that language.
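To illustrate how little surface syntax there is, here is a toy s-expression reader in Python (a sketch of what Lisp's built-in read does in one call; for simplicity, atoms are assumed to be integers or symbols):

```python
def tokenize(src):
    """Split source text into parens and atoms."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    """Read one expression from a token list (consuming it)."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)  # discard ")"
        return expr
    try:
        return int(token)
    except ValueError:
        return token  # a symbol

def parse(src):
    return read(tokenize(src))

print(parse("(defun square (x) (* x x))"))
# ['defun', 'square', ['x'], ['*', 'x', 'x']]
```

The output is ordinary nested lists, which is exactly why code-manipulating code is so natural in Lisp.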


Have you got a good example where Lisp macros would work but standard Haskell would "fail"?

I've done some compiler work in Haskell like writing programs (semantic actions) as algebraic data structures and then transforming or interpreting the tree. Would I gain something by using Lisp macros in this instance?


Many of Lisp's uses of macros can be replicated in Haskell by taking advantage of lazy evaluation. This is enough for most needs I've ever had, but some things still need the macro system that Template Haskell provides. Deriving lenses from type declarations is one of the up-to-the-minute uses.
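As a rough illustration of the laziness point, even a strict language can fake it by passing thunks; below is a hypothetical short-circuiting `or` written as a plain Python function (in lazy Haskell, ordinary arguments already behave this way, which is why no macro is needed):

```python
def my_or(a_thunk, b_thunk):
    """Short-circuiting 'or' as an ordinary function over thunks.
    The second thunk is only forced if the first yields a falsy value."""
    a = a_thunk()
    return a if a else b_thunk()

def boom():
    raise RuntimeError("should never be evaluated")

print(my_or(lambda: 10, boom))       # 10; boom() is never called
print(my_or(lambda: 0, lambda: 20))  # 20; falls through to the second thunk
```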


Have you got a good example where Lisp macros would work but standard Haskell would "fail"?

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...


If pg then perhaps it was (or stems from) this: Programming Bottom-Up - http://www.paulgraham.com/progbot.html


> In Lisp, you don't just write your program down toward the language, you also build the language up toward your program. As you're writing a program you may think "I wish Lisp had such-and-such an operator." So you go and write it. Afterward you realize that using the new operator would simplify the design of another part of the program, and so on.

That's close, but not quite the wording I remember, so I must have seen it somewhere else as well. That's a pretty good description of what I was referring to, though.


I was (re-)reading On Lisp yesterday and came across two further variations by pg:

In preface:

As well as writing their programs down toward the language, experienced Lisp programmers build the language up toward their programs.

Then later in 1.2 "Programming Bottom-up", pg rephrases that statement:

In Lisp, you don’t just write your program down toward the language, you also build the language up toward your program


>it was not obvious why Lisp would be more powerful than Haskell. The bare-bones syntax felt more like a "proof of concept" than an actual strength.

The answer is that Lisp code is written with Lisp data (the property known as homoiconicity). Why is this an advantage? Because it greatly simplifies metaprogramming; Lisp macros are simply functions that operate on Lisp data structures.
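A toy illustration of that point (Python lists standing in for Lisp forms; the `unless` rewrite is a classic textbook example, not real Clojure): a macro is then nothing more than an ordinary function from code to code.

```python
# Code-as-data: Lisp forms modeled as nested Python lists.
# A "macro" is just a function that takes code and returns new code.

def unless_macro(form):
    """Rewrite (unless test body) into (if test nil body)."""
    _, test, body = form
    return ["if", test, "nil", body]

source = ["unless", ["=", "x", 0], ["print", "x"]]
expanded = unless_macro(source)
print(expanded)  # ['if', ['=', 'x', 0], 'nil', ['print', 'x']]
```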


"Compared to Haskell I found the syntax very obtuse and it was not obvious why Lisp would be more powerful than Haskell."

Language "power" is ill-defined, and basically means "good".

If you think that homoiconicity is good, then you probably like lisp. If you think that referential transparency is good, then you probably like haskell. Those are mutually exclusive, so one language can't really have both. But either one could be considered "powerful" and either one could be said to help prevent bugs.


How is it that referential transparency (the ability to swap a reference with its value: no hidden inputs) and homoiconicity (both the source and the resulting syntax tree sharing the same structure) are mutually exclusive?

What homoiconicity offers is the macro system, the ability to operate on the syntax tree as a regular language data structure.

The difference is a language like Haskell enforces referential transparency where in a lisp it is up to the developer whether or not a function will be referentially transparent.


"How is it that referential transparency..."

That was incorrect, I meant: "the kinds of metaprogramming associated with homoiconicity are mutually exclusive with referential transparency".

A macro could not, for instance, take a variable name as an argument and return a result that's based on the value of that variable and maintain referential transparency. So that would be a pretty weak macro system.

I suppose there may be other uses for homoiconicity, but I don't know enough about lisp to comment on that.


Maybe I don't understand referential transparency, but why not? `or` is a macro in Clojure:

  (def a 10)
  (def b 20)
  (or a b) => 20

Is `or` not referentially transparent?


(please excuse my poor knowledge of lisp... check out the wikipedia page if my explanation falls apart: http://en.wikipedia.org/wiki/Referential_transparency_%28com...)

My clojure interpreter gives "10" not "20".

The function "or" is referentially transparent, because if you call it with the same argument values you are going to get the same result every time. (or 10 20) is the same as 10.

However, the following function is not referentially transparent:

  (defn incx [] (do (def x (+ x 1)) x))
Because subsequent calls return different results:

  (def x 2)
  (incx)
  (incx)
Things like side effects and many kinds of metaprogramming break referential transparency.


It seems to me that what you've demonstrated is that side effects are possible in Clojure, esp. with sufficiently obfuscated (read: un-idiomatic) code. But you're working too hard anyway; Clojure explicitly admits mutability already, through various concurrency pieces like atoms and refs.

That said, if any admission of mutability is sufficient to disqualify a language from claiming to encourage or support referential transparency, then Haskell fails the test, too. unsafePerformIO is a trivial example. I'm sure if you were determined to introduce nondeterminism, you could find a lot more. But that's not the point, is it?

Maybe there's a way to demonstrate your point, but what you've shown here doesn't involve homoiconicity or macros at all.


I should have known that my poor knowledge of lisp would not be excused on HN ;-)

Anyway, I tried learning enough Clojure to prove that routine metaprogramming would often not be referentially transparent.

First, note from the wikipedia page that "referentially transparent" essentially means that the same function with the same arguments will always produce the same result, and that you can call it more (throwing away the result) or fewer (replacing calls with the result) times without affecting the meaning of the program.

So, mutating variables (assignment) violates it, as does reading mutable variables other than the arguments. So, using "def" on a variable that already has a value is not referentially transparent.

But let me show you something closer to what I had in mind when I said that referential transparency and metaprogramming are essentially inconsistent.

Take the simple macro:

  (defmacro swapargs [x] 
    (list (nth x 0) (nth x 2) (nth x 1)))
First, I'll show that the argument that it takes must be the code itself, rather than the result of evaluating the code. The "or" macro before made it hard to tell whether it was taking the code as arguments or the result of evaluating the code as arguments. But with the macro above, it's easy to see:

  (swapargs (mod 7 5)) => 5
  (swapargs (mod 10 8)) => 8
If we are trying to show that swapargs is referentially transparent, we assume that it will return the same result given the same arguments. Because (mod 7 5) = (mod 10 8), we know it must not be taking the result as an argument (the result of the mod is 2 in both cases, but the result of swapargs is different). So it's taking the code itself.

Next, we show that given the same code-as-an-argument, it may return different results in different contexts. That's easy to show:

  (defn foo [a b] (swapargs (mod a b)))
  (foo 7 5)
  (foo 10 8)
Now, we can't replace the call to swapargs with its result, because it changes depending on the values of "a" and "b" at the time, even though the argument is always the same code "(mod a b)".

So, this kind of advanced metaprogramming doesn't seem compatible with referential transparency. Perhaps some subsets are, but I don't even think the C macro system could be supported in a referentially-transparent way.

I also think that kind of metaprogramming tends to defeat many kinds of static analysis, such as advanced type systems. I'm less sure of that one, but for practical purposes now it seems to be true.

So, I think lisp and haskell are close to local maxima for their particular philosophies, but neither one is any kind of epitome of programming or "more powerful" than the other.

Personally, I think lisp-style metaprogramming is very cool, and I am happy I spent a few minutes trying out clojure. However, I don't think it is solving a problem that is very important to me at a practical level. I am trying to learn haskell because it is trying to solve the kinds of problems that I actually have -- mainly software engineering problems (greater confidence in code, more readability and maintainability). Not sure whether it will help solve those problems for me, but they are trying very hard to do so, and make some pretty compelling arguments.


I admit I didn't have enough time to try your code, so excuse me if I've overlooked something, but it seems to me that the basic oversight you made is that the Clojure compiler processes code in two steps. In the first step, all macros are expanded, replacing every macro call with the actual code it generates. Then the resulting, "macro-free" code is compiled to bytecode. So you can try macro calls as many times as you want: all calls with the same arguments (which are Clojure forms, that is, code, and not the runtime values of that code) result in exactly the same code. The 5s, 8s, and all the other data from your example do not yet exist at that moment. Then the transparency of function calls is as advertised: a call is referentially transparent if your code is pure (no Java calls etc.). I think the misunderstanding was in forgetting that macros are expanded and gone long before your code starts to run...


I am more interested in what the human sees than the compiler. And to a human, it does not look referentially transparent.


OK, I'll try to be more precise this time, so I hope that you reconsider at least a few things:

First, I think that the main misunderstanding is, as you mentioned, your somewhat incomplete experience of Clojure.

Second, perhaps there is a tiny bit of a strawman fallacy in your argument. Why? Well, def is not for defining "variables". It gets mentioned many times in the Clojure literature: these are vars, you can change them, but they are NOT there for regular data; they exist to let you give names to your functions and the rest of your Clojure artefacts. The programmer does not need metaprogramming or macros to shoot himself in the foot using vars as "variables". How does that relate to the Wikipedia definition of referential transparency? Well, only PURE functions are referentially transparent. def is not pure, and it is not even a function - it is a special form. Of course, it can be confusing for a novice, but it has been clearly stated many times in the literature how and why, so for anyone who has learned the basics well and is not trying to recreate Java/Python/Ruby/C coding style in Clojure, it should not be a source of problems.

As for your macro example, the first part of your argument, "First, I'll show that the argument that it takes must be the code itself, rather than the result of evaluating the code" - that is something macros are for, so the programmer expects exactly what you described, although you did not need a macro for swapping arguments in a function call.

I hope that we agree that, if we know there are two phases in the Clojure compilation process, a "macro" phase and a "regular code" phase, then everything is fine and clear. I understand your complaint is that the programmer might forget this and so might be confused by the examples you stated, so I will continue with that assumption in mind.

Let's try referential transparency with the function foo:

1) Is the function foo referentially transparent?

  user=> (defn foo [a b] (swapargs (mod a b)))
  #'user/foo
  user=> (foo 7 5)
  5
  user=> (foo 10 8)
  8
  user=> (foo 10 8)
  8
  user=> (foo 10 8)
  8
  user=> (foo 7 5)
  5

As we can see, "the same function (foo) with the same arguments will always produce the same result, and that you can call it more (throwing away the result) or fewer (replacing calls with the result) times without affecting the meaning of the program".

2) Is swapargs macro referentially transparent?

  user=> (swapargs (mod 7 5))
  5
  user=> (swapargs (mod 7 5))
  5
  user=> (swapargs (mod 7 5))
  5
  user=> (swapargs (mod 10 8))
  8
  user=> (swapargs (mod 10 8))
  8

Is "the same function (swapargs) with the same arguments will always produce the same result, and you can call it more (throwing away the result) or fewer (replacing calls with the result) times without affecting the meaning of the program"? Well, is swapargs a function? No, but let's forget that for the sake of being fair. The proper way to resolve this is to read the documentation of swapargs. And the documentation would say that the argument to swapargs is a Clojure form (something like (mod 7 5)), not a number (something like the result of calling (mod 7 5)). With that in mind, it is clear that swapargs is also referentially transparent with regard to its arguments. We CAN replace swapargs with its result, as you can see in the aforementioned code. If we wanted our argument to be the result of (mod 7 5), we would use a function, not a macro!

With the rest of your post, I agree. Macros are not superpowerful, you can shoot yourself in the foot with them (and every Lisp book warns you about that in many ways), and Haskell is awesome in many ways. There could be many pitfalls with Clojure and macros, but I think these pitfalls are not of the kind that your examples show :)


OK, I agree now, thank you for the detailed explanation.

To summarize, you are saying that in my example, the programmer would almost certainly notice that it's a macro; at which point he would know to look at the documentation and it would still look like a referentially-transparent macro.


As presented, the example macro doesn't demonstrate referential transparency, but then again, the example appears to be incorrect.

The issue is invoking "(swapargs (mod 7 5))". I tried this in the Scheme REPL (Chicken to be precise), in which the macro was defined:

  (define-syntax swapargs
    (syntax-rules ()
      ((_ ls) (list (list-ref ls 0) (list-ref ls 2)
                    (list-ref ls 1)))))

  (swapargs '(a b c)) => (a c b)
  (swapargs (swapargs '(a b c))) => (a b c)
In other words, the macro does show referential transparency.

However, the following doesn't work:

  (swapargs (modulo 7 5)) => Error: (list-tail) bad argument type: 2
The problem is the argument is evaluated first, and "2" is not a list. (The macro requires a list-of-3 argument.)

  (modulo 7 5) => 2
Probably, what was intended was like this:

  (swapargs '(modulo 7 5)) => (modulo 5 7)
And again:

  (swapargs (swapargs '(modulo 7 5))) => (modulo 7 5)
  (eval (swapargs '(modulo 7 5))) => 5
  (eval (swapargs (swapargs '(modulo 7 5)))) => 2
  (eval (swapargs (swapargs '(modulo 10 8)))) => 2
It looks like confusion between literal and evaluable lists prompted the wrong conclusion, but in this case it's simple to rectify.

Of course, macros in Scheme/Lisp can easily become as convoluted and bug-ridden as any other code, even aside from arguments about the virtues of "hygienic" vs. "unhygienic" systems. Properly constructed, macros remain an essential feature of Lisp/Scheme languages.

BTW, if we're comparing qualities of programming languages, here's a real-life example showing the particular merit of Scheme. I took on the task of creating a complex application (a web server supporting multiple hosts) and decided to write it primarily in Scheme (and some C). The first version was up and running in less than half a year.

Inevitably, months after the project was deployed changes were necessary. Despite the length of time since last seen, the code wasn't obscure to me, it was easy to understand and pick up where I'd left off before. Definitely different from prior experiences.

The crux is getting a good grasp of its core, macrology perhaps being among the harder parts. But once understood, Scheme allows enhanced productivity; I've found this more so than with other languages "under load" in parallel situations.


Thank you for the detailed reply.

When you say my example is incorrect, do you mean that it's incorrect in clojure, or only in scheme? I tried my examples in clojure and they appear to work and appear to demonstrate a lack of referential transparency. I assume that clojure is a valid lisp to make a point about metaprogramming and macros.

Also, it looks like it's fairly easy in scheme to show the same thing, which it looks like you started to do (I'm not sure whether you agree with me about that or not):

  (define-syntax swapargs
    (syntax-rules ()
      ((_ ls) (list (list-ref ls 0) (list-ref ls 2)
                    (list-ref ls 1)))))
  
  (define a 7)
  (define b 5)
  (eval (swapargs '(modulo a b))) => 5
  (define a 10)
  (define b 8)
  (eval (swapargs '(modulo a b))) => 8
The two calls to "eval" are identical, yet return different results. That breaks referential transparency.

"showing the particular merit of Scheme"

From what I know, I like lisps of various flavors. I just said that they didn't really speak to the kinds of problems that I deal with. Maybe if I wrote more lisp I would see why it does so, but currently I do not.


  > The two calls to "eval" are identical, yet return
  > different results. That breaks referential transparency.
You are right, I don't know much about the syntax of clojure, but the Scheme version works as I'd expect. Yes, the eval calls return different results, but then again, we'd expect to compute a different output for different inputs.

What I tried to show is that calling (swapargs (swapargs '(a b c))) will always return the original list, that is, it demonstrates referential transparency. In the case of '(modulo a b), evaluation returns the same result when repeatedly given the same a, b inputs.

The point of the macro was exchanging the second and third elements of the input list. Naturally for the modulo operation, the order of the inputs is significant, and exchanging the operands will give the "opposite" remainder as the result.

In your example the two calls to (eval ...) are not identical and the "different results" are perfectly correct, without implications for referential transparency.

Don't know what kind of applications you might have in mind, but of course, no PL is optimum in all domains. For the kinds of programs I've tackled, Lisp/Scheme has been a good fit. Or maybe it has to do with the way my brain works just as much as the purposes I am applying the language to. That wouldn't surprise me a bit.


"lisp it is up to the developer whether or not a function will be referentially transparent"

Although that may compromise the ability of the compiler to detect mistakes even when the developer writes all referentially-transparent functions. Not sure about that, but I suspect that it does in practice (currently) even if not theoretically inherent.


> Those are mutually exclusive, so one language can't really have both.

There are Lisp computer algebra systems like Maxima and Axiom that can be used for advanced term rewriting operations. Referential transparency, which is replacing terms with their definition, is just a very simple term rewriting operation.


> If you think that homoiconicity is good, then you probably like lisp. If you think that referential transparency is good, then you probably like haskell. Those are mutually exclusive, so one language can't really have both.

This mutual exclusivity is a strange claim that I have never encountered before (but I'm not a computer scientist). Could you explain it?


I gave some more technical details in another reply. It's one of those things I thought for a long time, and I didn't think was very controversial, but I suppose it is.

Really, what I meant was the metaprogramming associated with homoiconicity can't be done in a useful and referentially transparent way. That's because "eval"-like things are not referentially transparent.

However, there are other uses for homoiconicity aside from executing the code in an eval-like way. For instance, a text editor might use that property to inspect and transform the code being edited more easily. Also, I'm sure there are plenty of times it just makes things easier.

I think metaprogramming is also at odds with static analysis (e.g. a good type system), simply because the code itself is not static enough to analyze.


> Compared to Haskell I found the syntax very obtuse and it was not obvious why Lisp would be more powerful than Haskell.

What do you mean by "obtuse"? If by that you mean difficult to understand, Haskell's syntax is far harder to understand because it has operator precedence tables. Even if you manage to memorize the language's built in operator precedence levels developers can define their own operators with custom associativity and precedence.

I will take the uniformity of Lisp syntax over the complicated, incomprehensible, and hard-to-parse syntax of Haskell any day. Lisp syntax is so much simpler than that of syntactic languages like Haskell that you can even build a simple reversible parser that converts textual code to and from list structures with whitespace metadata: http://lisp-ai.blogspot.com/2012/09/reversible-lisp-reader.h....


I would advise checking out the problems at http://www.4clojure.com/ over the koans. You will definitely find some challenging problems. The nice thing about these is that you can look at how other users solved the problems, which helps convey a sense of style. I haven't done a problem in a while, but I've often gotten something out of looking at solutions from 'amalloy' and 'chouser' in particular. Of course, you can also check out my own ('gajomi') solutions :).


Thank you. I will check it out when I get some spare time.


Do you go to Yale?? Haskell school of music?


I am an intermediate programmer with just over one year of experience.

Everywhere I read about the power of Lisp, and I really want to use it. If it is so good, why isn't it used more?

It is very easy to get sites running using ASP.NET, WordPress, RoR or Django. I have worked on production sites using the first two, and have personally tried the last two on small projects.

Is there a way to use Lisp professionally?


> If it is so good, why isn't it used more?

* Weird syntax (for most people).

* No free implementations existed during a key period (the 80s and 90s), so there was no initial traction, and no useful libraries or killer apps to pull the whole ecosystem along. Implementations didn't even exist for commodity hardware.

* The commercial implementations cost too much, so they suffocated the ecosystem. People preferred coding for free in C or Perl to paying an arm and a leg for Lisp. So they wrote all the useful libs in C, Perl, Java and Python instead of Lisp.

* No canonical implementation, late and incomplete standardisation, which led to extreme fragmentation, which further killed off the growth of the ecosystem. Instead of writing useful libraries, Lispers wasted effort writing 1001 incompatible implementations of the same basic system.

So to summarize, I'd say the Lisp ecosystem is _still_ suffering the consequences of the bad strategic decisions made 30-40 years ago.

But it is slowly but steadily healing and improving, especially over the last few years. It has a high-quality free implementation in SBCL [1], consolidated CPAN-like library management with Quicklisp [2], and an IDE in the Emacs-based SLIME [3]. Everything is getting better.

[1] http://www.sbcl.org/

[2] http://www.quicklisp.org/beta/

[3] http://common-lisp.net/project/slime/


To take the reverse perspective (why it is coming up now), I think that in the 80s and 90s, the increase in processing power came in the form of faster processors. Then, pretty suddenly, over the past decade, that trend hit a wall, and instead we're getting more cores, but at the same speed. This rekindled interest in concurrent programming, and Lisps have a distinctive edge in that space.


Add in the flame wars and jerks on Usenet who crapped on many folks that were interested. People went off in search of friendlier environments and ended up in C, Perl and Python...


> * No free implementations existed during a key period (80s, 90s) so no initial traction, no useful libraries and killer apps which would pull the whole ecosystem. Implementations didnt even exist for commodity hardware.

Emacs LISP (OK, a limited dialect) was available, and so was CMUCL (a full implementation), which I believe was used for teaching in 1992 when I first came into contact with LISP at our uni ...

Also, back then (80's and 90's) most people still paid an arm and a leg for C, Modula and Pascal on their platforms, so that can't have been an issue. My take is that LISP implementations were too slow to justify their use for most people over faster compiled languages. Whether you paid for the language or not, you expected to be able to get the most out of your hardware.


> faster compiled languages

Lisp is a compiled language.

For that matter, it's a damn fast one, too. The Lisp implementation of PCREs is actually faster than Perl's, by some benchmarks.

I don't want to start a tangent about benchmarks and their relevance, but it's clear that Lisp performance isn't a limiting factor.


Whenever Lisp's history is mentioned we get another free replay of this classic "who's on first" bit:

A: Lisp didn't succeed in part because it was slow

B: What? Lisp isn't slow!

Do you see the problem? No, it isn't slow now, but it was slow and a resource hog and that is a legitimate variable that may have negatively affected uptake during key points in its history. Times have changed, implementations have improved, resources have become less scarce, but the past is still the past.


C is still faster at common tasks, and back then code from readily available C and Pascal compilers was much faster than CMUCL or ELISP (both had a bytecode interpreter only, AFAIR). My point is that in the 80s and 90s computers were much slower and a factor of 2 was a big deal, especially for professional developers who had to write well-performing applications, though nowadays a good language is "fast enough" if it's only half as fast as C.


CMUCL has had native code generation for a VERY long time.


I used LISP for several AI classes in the early 90s. My final big class project could do things that were impressive compared to my (non-AI) programs in C++ -- but debugging was absolute hell, because relatively trivial changes would cause the Unix workstation I was working on to run out of stack space running my code. I never used LISP again after that.


> IDE with Emacs-based SLIME

That's also not new. Common Lisp has had an Emacs IDE since before the dark ages. It was called ILISP. Every Lisp + Emacs user was using it. Well, Franz had/has its own Emacs interface called ELI.


> No free implementations existed during a key period

BS.

CMUCL. AKCL. CLISP.


OK, let me correct myself: No competitive free implementation existed able to take a leading position and bootstrap the ecosystem, like gcc, cpython, perl and javac did for their respective language ecosystems.

I did not intend to imply that nothing. existed. whatsoever. cmucl, gcl (akcl) and clisp even today are insignificant also-rans and basically unmaintained abandonware.


CMUCL was from the start very significant.

DEC Common Lisp was based on CMUCL. LispWorks was based on CMUCL. Scieneer Common Lisp is also based on CMUCL.

SBCL is a fork of CMUCL. SBCL is very popular in the 'free software' Lisp community - it's just a repackaged CMUCL.

Lots of other Lisp implementations took and still are taking code from CMUCL, since it is 'Public Domain'. Free software.

Btw., CMUCL still has monthly releases.

AKCL spawned several implementations. Including GNU Common Lisp (GCL), which was widely used for some time - in combination with GCC.

GCL has been long used to run Maxima, the free version of Macsyma.

AKCL/GCL is nowadays ECL. Another fork. Which is maintained until today. Again ECL is possible because GCL was Free Software.


The paradox of choice.

One implementation that's good for 80% of users will gain more traction than 10 implementations where each user has to figure out which one to use.

Lisp attracts maximizers, while satisficers are more successful at delivering software to real people.


> No competitive free implementation existed able to take a leading position and bootstrap the ecosystem, like gcc, cpython, perl and javac did for their respective language ecosystems.

it was never a goal in the Lisp community to develop a single unified or leading implementation. This has nothing to do with 'free software' or not.


"it was never a goal in the Lisp community to develop a single unified or leading implementation"

...and that's one reason it hasn't ever achieved critical mass: it's so easy to write software that only works on one implementation that the various CLs effectively compete with each other on the same level that each is competing with other rapid development languages. It wasn't CL vs Python vs PHP, but instead CMUCL vs Python vs SBCL vs Lispworks vs PHP vs Allegro. Scheme has a similar problem. These are the problems that come from having a standard instead of a canonical implementation, in my opinion. Canonical implementations promote growth in a way that a standard for a language doesn't.


Yeah, and Python competes against Ruby against Perl against Javascript against Python 3 against Perl 6. Without sharing any code. And you win almost nothing. CPython, Perl, Ruby, ... are all slow scripting languages, with the newest of them, Ruby, being the slowest of all.

CL implementations can share a lot of code.

I've just compiled Mark Tarver's Shen in another Lisp by just changing less than ten lines of code. It took me ten minutes.


I've struggled with various Scheme programs not working in different implementations (mostly code from books like Lisp in Small Pieces) and have heard plenty about the fragmented Scheme landscape, but is this really such a big problem with Common Lisp? And is/was that really a big reason why Lisp has failed to catch on? (being genuine here, I have no idea really about the history and have only used CL a bit)

It seems to me that there are plenty of more popular languages with a similar glut of implementations: there are multiple JVM vendors, multiple implementations of Java (Sun/Oracle and now Dalvik, ...), many C and C++ compilers, etc...

This is total speculation, but it is perhaps because things like threads and networking were not part of the Common Lisp standard that there are fragmentation issues that impeded adoption? Or do you still think that canonical implementation vs language standard is that much better, in almost all cases?


In 2002 I was trying to use free Lisps in production and finding it untenable due to serious bugs in fundamental areas like network sockets, and terrible support for threads. I was also surprised that implementors didn't seem to take these issues seriously.

It was very disappointing compared to my experience using commercial Lisps like Allegro Common Lisp and Macintosh Common Lisp in grad school in the mid 90s. They weren't perfect, but MCL on a machine with lots of memory was a pleasure I will always remember.


I was paid to develop in Lisp (in a research environment) from '89 to '95 and from what I recall the commercial Lisp environments were way better than the free implementations - at least on the hardware we used (Sun 3s, Sun 4s and the DEC Alphas).


That's still true - for various criteria.


    > If it is so good why ain't it is used more?
My guess is that it's hard to create well-gelled lisp programming teams.

If you're using Java or C#, there are certain ways of going about things, and there's general consensus about this. You can pick up Java code that is written by someone you've never met and have a pretty good idea of being able to work out what's going on.

Something similar with python: whitespace forces a lot of style, and it's backed up by the style guide.

Whereas lisp is completely open-ended. Want to run a large system built entirely from lists? Fine. Want to build an object system tinted by your weird philosophies? Fine. Want to create sophisticated self-modifying code? Fine.

It's a worse version of a problem you have with C++, finding a common subset of things that the team will stick to (or only hiring gurus who are interested in all the quirks and who know the history).

Style arguments within teams are draining, but you need to have commonality to be able to work as a team.

Clojure has some edge here for weird reasons. It discourages recursive techniques because of limitations in the JVM. And, if you're going to be interacting with libraries from other parts of Java, you probably don't want to be acting as funky as you might be tempted to otherwise.


I'm coming from a Clojure perspective (around a year of using it), so maybe some of this stuff is different with CL, but Clojure is incredibly simple compared to C++ and in my experience there's no problem with everyone simply using the entire language.

I've read a decent amount of Clojure source, since documentation is admittedly a problem with a lot of libraries, but it's the language I've found actually easiest to read. The Clojure way of going about things is to pass around and manipulate simple immutable data structures, which I find easier to understand than large class hierarchies. Macros are usually used for creating DSLs or removing simple boilerplate, which leads to smaller, easier to understand code bases in my experience, rather than implementing custom object systems or something like that.
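To make the "macros remove simple boilerplate" point concrete, here is a minimal hypothetical sketch; the `with-timing` macro and its output format are my own invention, not from any library:

```clojure
;; A boilerplate-removing macro: it wraps any body of expressions,
;; times them, prints the elapsed time, and returns the body's value.
;; Without it, every call site would repeat the timestamp capturing
;; by hand. start# and result# are auto-gensyms, so they can't
;; collide with names in the caller's code.
(defmacro with-timing [& body]
  `(let [start#  (System/nanoTime)
         result# (do ~@body)]
     (println "elapsed ms:" (/ (- (System/nanoTime) start#) 1e6))
     result#))

;; Immutable data in, immutable data out -- no class hierarchy needed:
(with-timing
  (reduce + (map inc (range 1000)))) ;=> 500500 (plus a timing line)
```

The call site stays a single readable expression; the instrumentation lives in exactly one place.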

I don't think Clojure really discourages recursion so much as it lets you avoid using it explicitly by providing a good standard library, but many of the standard lib functions are themselves written recursively.

Anyway this was all in response to why it isn't used more. I don't really have a good answer for that, but a lot of it comes from people being weirded out by its simple syntax, and also not wanting to learn to think functionally. It is being used, though; the most successful Clojure example I can think of off the top of my head is Storm, which is usually billed as a Hadoop for realtime processing, and it's being used at a lot of large companies: http://storm-project.net/


The books I learnt lisp from emphasised recursion early, and were taught from a perspective of emphasising the power of the language and tricks available to programmers. This approach encourages arcana, similar to bit-shifting tricks you'd find in C books like _Hacker's Delight_ (Warren). Through lisp macros (present in Clojure), you can create your own sub-language.

So merely through the absence of this emphasis on expressive power, Clojure is a bit different. And I think that's great. I might be using Clojure at the moment, except I need to be able to make native builds for the project I'm working on, and so I went with Racket.

Yeah, people are weirded out by the syntax as well. Which is sad, because people who are used to it tend to love it. But if this was the only problem, it could have been easily bypassed decades ago with a whitespace alternative. That is, where developers would be able to denote blocks with python-style whitespace and colons as an alternative to parens. Last time I mentioned this someone pointed me to a distribution of arc that already did it.

    > many of the standard lib functions are themselves
    > written recursively
Maybe the JVM has been updated while I wasn't looking and no longer has a problem with deep recursion. What are some stdlib functions that are recursive?


The JVM still doesn't do tail call optimization, but Clojure has a special form, recur, that you can only use in tail position and that doesn't consume stack space. An example: https://gist.github.com/4493547 (loop is like let but works with recur).

A simple one from the standard lib is last: https://github.com/clojure/clojure/blob/master/src/clj/cloju...
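Sketched inline, the loop/recur pattern from the gist above looks like this (the function name `sum-to` is my own, purely illustrative):

```clojure
;; recur re-binds the loop locals and jumps back to the loop head, so
;; the JVM stack stays flat no matter how many iterations run. A plain
;; self-call here would risk a StackOverflowError for large n.
(defn sum-to [n]
  (loop [i 0 acc 0]
    (if (= i n)
      acc
      (recur (inc i) (+ acc i)))))

(sum-to 1000000) ;=> 499999500000
```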


Here's one bank that is blogging about moving to Clojure: http://www.pitheringabout.com/?p=778

It is used in a few places, but it's not common. The reason it isn't used often is because no one bothered to build anything with it. So, if Clojure / Lisp / etc. want to make the language popular, you should build something that makes money and tell the world about it. This is what happened with RoR and Django.

I have a CRUD site written in Clojure. All total it's around 1,500 LOC. I only have to add in a mailing feature and one other small operation, and take it off the now-deprecated Noir framework. All told, it will still be 1,500 LOC, I think. It takes in quite a bit of information, has multiple "views," dynamically generates HTML, connects to a database, and allows quite a few unique operations, so it's actually very easy to create a website in Clojure, and it can be highly elegant once you get used to the data structures and how to destructure.

The Clojure community has done an excellent job of making the process of building -> deployment very easy; it's just that no one has stepped up to the plate and done anything with the tools, and unfortunately, I fear Lisp will always be a Land of Toys and No Product Shipped.


Do the 1500 LOC include markup, script and css?


I used Twitter Bootstrap for the CSS...

The 1500 LOC is only Clojure. I could probably dump off 300 if I wanted to.


"Everywhere I read about the power of lisp and really want to use it. If it is so good why ain't it is used more?"

In a sense, people are using it more. A lot of features that Lisp pioneered and that used to be radical back in the day (like built-in garbage collection, lambdas, first-class functions, etc) have now been assimilated into other languages.

As a result, many Lispish things are now pretty mainstream. Not everything about Lisp has caught on, though (yet). In particular, many people are still averse to Lisp's syntax (or lack thereof), and its many parentheses. Because of this, they're missing out on some of Lisp's most powerful features (lack of syntax and parentheses are features, as is Lisp's very simple macro system).
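The "lack of syntax" point is easiest to see in code: a Lisp program is literally made of the language's own data structures. A minimal Clojure illustration (any Lisp would do much the same):

```clojure
;; A quoted form is just a list -- ordinary data you can take apart:
(def form '(+ 1 2))
(first form)  ;=> the symbol +
(count form)  ;=> 3

;; ...and hand back to the evaluator whenever you choose:
(eval form)   ;=> 3
```

This is what makes the macro system "simple": macros are plain functions over these lists, not a separate preprocessor language.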

Also, a lot of people have a pretty skewed, outdated view of Lisp, often based on rumors of a bad experience with some crippled Lisp they were forced to learn in school.

Very few people who have a very negative impression of Lisp have had much experience with a modern, full-featured implementation of Common Lisp (like SBCL), or Scheme (such as Chicken or Racket).

Those that have quite frequently describe their experience with them as enlightening and often wish they could use Lisp/Scheme at their day job.


For sure; what folks complain about the most (syntax) is where its power is.

I've been getting into Guile recently and find the very tight integration with C to be quite nice. It's like the best of both worlds.


When I first learned Lisp, I quickly got to the point where I "don't even see the parentheses". Shortly after that I learned to love them, and to see them (and Lisp's simple syntax) as a great strength.

Now extraneous syntax, even in a half-Lisp like Clojure, seems ugly to me. It's a real pity more people don't appreciate the elegant simplicity of Lisp (and especially Scheme).


Just a curiosity -- why do you call Clojure a "half-Lisp?"


Not me, but I would guess it's because, while Common Lisp (i.e. SBCL) and Clojure are both Lisp-like languages per se, Clojure departs further from "pure" Lisp than Common Lisp does, in that it replaces lists with things such as vectors in a lot of places (see defn syntax).
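For example (a side-by-side sketch; the `add` function is just an illustration):

```clojure
;; Common Lisp keeps everything in lists:
;;   (defun add (a b) (+ a b))
;; Clojure's defn puts the parameter list in a vector instead:
(defn add [a b]
  (+ a b))

;; Vector and map literals pervade idiomatic Clojure code:
(let [point {:x 1 :y 2}]
  (add (:x point) (:y point))) ;=> 3
```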


It could also be that it is hosted on the JVM.


Geometric Algebra is really cool and powerful and good, why isn't it used more? The best things aren't often the most popular ones. (And in a practical business setting, the best is often the enemy of the good enough as well as the better. You get the circular cause problem because Lisp isn't mainstream, so most programmers will not know it, and will be faced with the decision to learn it or go with what they know.)


    If it is so good why ain't it is used more?
pg has written about this [1], and the main reason he found was that popularity is always self-perpetuating. If one of the languages gets a head start with libraries, it's usually easier to develop programs and libraries within this language than in other languages. If you know a popular language, you're more likely to have more job opportunities. If you're a manager, you would prefer to be able to replace programmers easily.

That's one of the reasons why Clojure started off on the JVM in the first place: It has libraries and an already thriving ecosystem. In addition, it's a nice bonus for language developers to not have to worry about performance related to GCing, threading, OS-specific differences etc, which the JVM abstracts away.

[1]: http://paulgraham.com/iflisp.html


Clojure is very capable for web development, but getting started is not as clear cut as it is with something like Rails or Django. This is because the Clojure community tends to eschew large frameworks, and instead prefers using smaller, more focused libraries. This provides a lot of flexibility, but is intimidating at first because it requires you to choose your own libraries for routing, db interaction, templating, etc.

Everything is built on top of Ring, which is the Clojure equivalent of Rack or WSGI. Compojure is pretty much the standard for routing at the moment. Korma is a very popular DSL for writing SQL. For templating, Hiccup and Enlive are popular, but there are other options as well.

I would start off by getting a decent understanding of Ring and Compojure and building from there.

--

https://github.com/ring-clojure/ring

https://github.com/weavejester/compojure

https://github.com/weavejester/hiccup

https://github.com/cgrand/enlive

http://sqlkorma.com/

http://www.clojure-toolbox.com/ is really helpful for finding appropriate libraries.
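A minimal sketch of how those pieces fit together (the namespace name and port are made up; this assumes Compojure 1.x and the Ring Jetty adapter are on the classpath):

```clojure
;; Ring models a web app as a function from a request map to a
;; response map; Compojure's routing macros build such a function
;; out of individual routes.
(ns hello.core
  (:require [compojure.core :refer [defroutes GET]]
            [ring.adapter.jetty :refer [run-jetty]]))

(defroutes app
  (GET "/" [] "Hello from Ring + Compojure"))

;; Start an embedded Jetty server on port 3000:
;; (run-jetty app {:port 3000})
```

Swapping Hiccup in for the string body, or Korma for a database call inside a route, changes nothing about this shape, which is the flexibility the parent comment describes.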


I do a lot of web development in Clojure using Noir, Compojure, Hiccup, etc.

I find this setup similar to using Ruby + Sinatra.

I still keep up to speed with Rails versions, etc., but in the last year almost all of my web development has been with either Clojure or Ruby + Sinatra.


> If it is so good why ain't it is used more?

Maybe I'm too pessimistic, but there is almost no correlation between quality and frequency of use as far as programming languages are concerned. The reasons for that are legion.


For many things you won't even know that they're written in some kind of Lisp.

* the first Gulf War in Iraq was won because a Lisp application took care that US soldiers had everything from toilet paper and ammunition to gasoline. Plus it took care that the troops were in the right place.

* the missions of various Telescopes, especially the Hubble Space Telescope are planned with a Lisp-based planner

* American Express has been checking complex business card transactions with a Lisp-based rule system

* many cars (Ford, Jaguar, ...) were designed using a Lisp-based design software developed by Evans & Sutherland

* turbines for various airplanes were designed in Lisp (Boeing, Airbus, ...)

Some of that stuff survived. Some not.

But still today, if you see product descriptions like this, you would not suspect that it is written mainly in Lisp, but it is:

http://www.ptc.com/product/creo-elements-direct/modeling/


>the first Gulf War in Iraq was won because a Lisp application took care that US soldiers had everything from toilet paper, ammunition to gasoline.

Really? Because IIRC it was won because it was fought by a superpower against a small country with 1/1000th the military resources.

I'm not talking politics in this comment. What I mean to say is it's another thing to say "Lisp was used in that system" and totally another to say "the war was won because of it".

There is a big possibility that there was absolutely no correlation between the language that system was written in and the war being won (and that is far more likely, anyway: wars have been won, before and after, without Lisp).

Lots of NASA missions use plain old C and do just fine. Should we say that they succeeded "because of" C?


Basically they had to wait for the Iraq invasion until the Lisp software was ready.

> Because IIRC it was won because it was fought by superpower against a small country with 1/1000 the military resources.

The Lisp software moved those 1000x military resources to Iraq.


Software does not move tanks. It's one part of an overall system that moves tanks. You over-reached on that point; it happens.


Software moves fleets. It coordinates tens of thousands of flights. It makes sure that hundreds of thousands of soldiers have supplies.

It's called logistics.

It's a new world.

It happened.

It's been said that this single piece of software paid back all of the DoD's investment in AI research.


Just because supposedly a piece of software performing a critical purpose was written in lisp does not mean lisp won the war.

If the software wasn't written in lisp, it would have been written in any other language.

And if the software wasn't written at all there would have been hundreds of people doing the software's work manually instead.

Did the software help? Probably. It's likely there would have been more screw ups if there was no software. But it takes a big leap to credit the software with winning the war.


But there was no other software. It was a logistics system written in Lisp which moved fleets, troops and supplies.

It was based on a decade of research in various planning software written in Lisp.

> And if the software wasn't written at all there would have been hundreds of people doing the software's work manually instead.

How so? How would it work to move hundreds of thousands of people, with hundreds of thousands of different types of things, between several continents? In a few months?


  the first Gulf War in Iraq was won because a Lisp application took care that US soldiers had everything from toilet paper, ammunition to gasoline
The word because means that if there was no Lisp application the war would not have been won. This is almost certainly false, even if the Lisp application did make things easier.


Without that application, nobody would have been there.


Without that application, there would have been another.


It wasn't.


That "it wasn't" doesn't mean that "it could not have been" (which was the point the other guy made).

This is more than an elementary logical mistake on your part, this is pure crazy.

By the same logic all those COBOL and Java applications that are deployed for something major (from banking to taxation to multi-national logistics) prove that COBOL and Java are as good as Lisp (and irreplaceable at that, seeing that "there wasn't something else deployed in their place").

Which app was used in ONE war doesn't matter at all in explaining which language is better than another.

Lots of far more crucial operations than the logistics of a minor war waged by a superpower against a small third-world country used other languages. For example, NASA missions used assembly and C.

Does that prove anything related to the suitability of those languages in general?

Quit the BS Lisp proselytisation with bogus arguments and hand-waving.


Well, if you go as far as to say that without Lisp nobody would have been there... Is it fair to say that Lisp started the war?


"Thisp aggression will not stand."


LOL! "Coalition for Peace Against LISP".


  > How should it work to move hundred thousands of people
  > with hundreds of thousands different types of things
  > between several continents? In a few months?
Have you considered reading a WWII history book? That was a far more impressive and substantial mobilization to multiple countries spanning multiple continents which was done without the benefit of software. It took more than a few months largely because they had to manufacture equipment and recruit/train personnel from scratch.

Either you're doing one epic troll or you're displaying a staggering ignorance of the ability of militaries, going back to the time of the Roman Empire, to pull off significant mobilizations.


May I propose the alternative explanation that you are overstating the case for Lisp and this software, because you just happen to like Lisp? I mean, look at your HN alias.

That particular software could have been written in any bloody language. Logistics is one of the more boring areas of software engineering anyway -- and the majority of it in the world runs in Cobol, Java and similar boring languages, just like most of the banking world runs.

Plus, it's not like the US army hasn't made a mess of war logistics before. How much did the Iraq war cost the country again?


In 1991 there was no Java to write an AI-based planning system for military logistics.

You may want to read what the software actually did and how it was developed. That would clear things up a bit for you.

http://en.wikipedia.org/wiki/Dynamic_Analysis_and_Replanning...


Productivity is about language + libraries + frameworks + community + your own familiarity with all of the above. The language is really the least concern when actually delivering features in a timely manner.

Programming is a high-variance activity. It's not that some programmers are 10x faster than others -- it's that an individual programmer may have 10x variance in performance on different days/weeks.

So, the two most effective things a programmer can do to become more productive are: (1) Get enough sleep. (2) Don't write code -- use libraries.

To the extent that Lisp's meta-programming support can help with (2), it's useful, but usually the lack of discoverable, minimalist, production-tested libraries is a worse tradeoff.

Essentially, the other ecosystems are good enough, and the Lisp ecosystem is not great for the bang-out-10-features-today style of most app development.


I am not too sure about Lisp in particular (especially as there are at least a few dialects of it, afaik), but I am currently looking into Clojure and, apart from bending my brain, tossing it into a rubbish bin, then fetching it and putting it back in, I like it very much.

Also, there's a Noir [1] framework build on top of it.

[1] http://www.webnoir.org/


Apparently Noir is deprecated in favor of Compojure, according to http://news.ycombinator.com/item?id=5027560.


I was surprised to hear this, but it's true: http://blog.raynes.me/blog/2012/12/13/moving-away-from-noir/


Noir lives on through lib-noir [1]. I also suggest checking out Luminus [2].

[1] https://github.com/noir-clojure/lib-noir

[2] http://www.luminusweb.net/


Here's a good tutorial on how to do web development in Clojure. No, it's not as "batteries included" as Rails, but I found the learning curve pretty gentle.

http://www.vijaykiran.com/2012/01/11/web-application-develop...

Once you have a Ring-based Clojure web-app, you can run it as any other Java Servlet app, eg. in Heroku.



Well, if you want to use Common Lisp to make a web page, Conrad Barski (who drew the OP comic) has a guide here: http://lisperati.com/quick.html


Heads up - that's circa 2004.


> If it is so good why ain't it is used more?

Speaking just for myself, it's because the most commonly recommended setups (such as Emacs with SLIME) require you to learn a new editor on top of learning a new language.

I think emacs is great, but it's not my editor of choice, and when a language distribution all but requires you to use emacs, it's going to have a hard time gaining traction with me.

This may have changed over the past few years, but it certainly stopped me from learning Lisp some 10 odd years ago.


There is SLIME for vi. And Paul Graham and team used vi and CLISP to build the store software - no IDE.

If you read Coders at Work (http://codersatwork.com/), the grown-ups don't use IDEs.


Ignoring the condescending remark for now, it probably was possible to run lisp without Emacs, but none of the tutorials mentioned how. They all pointed towards Emacs.

If I recall correctly, one tutorial (and several IRC lispers) even stated the following: "How do I use this with vim? Just use Emacs + SLIME, you'll be better off for it."

It's probably changed (at least I certainly hope it has), but it certainly hampered my adoption of lisp.


It didn't change last time I checked - Emacs is still almost mandatory for almost all tutorials on Lisps (I searched for beginner tutorial for Clojure about half a year ago). I find this attitude stupid too. Emacs is a great editor, not to mention operating system, but I just don't like it and I feel that I have the right to do so. It shouldn't be too costly to list the alternatives and help with getting expected functionality in other environments.

This was never really an issue for me, though, because I was lucky enough to start my adventure with Lisps from Racket and its excellent IDE, DrRacket. I then used it with other Lisps, because I was already familiar with it, and adding some keywords (well, names - for indentation purposes) was trivial. I didn't use them (other Lisps) long enough to be seriously irritated by the lack of a built-in REPL, so obviously YMMV - but I would recommend DrRacket as a low-entry-barrier alternative to Emacs as a Lisp editor - it handles parentheses rather well :)


Good information, thank you.


Well, I work solely with VIM now, but I have worked with Visual C++ 6.0 in the past and later with Zend Studio 5 & 6 version and then with Komodo IDE. I never touched Java, but I briefly came back to Visual Studio when doing some projects in C# and then when learning F#. There's also DrRacket, formerly DrScheme, an IDE for Racket, which I use sometimes and love (I used it instead of Emacs for other Lisps too btw). Ok, to the point: you are wrong.

Take my VIM for example, which I use for Python and (Java|Coffee)Script development (along with Erlang and a few other languages for hobby projects). I have a file list/tree/browser with NERDTree. I have a list of classes and functions in currently opened files with TagBar. I have "go to definition" and "display help/docstring" through Rope, along with four different auto-completion modes (partially built in, enhanced with SuperTab and a few other plugins, including Rope). Similarly I have support for refactoring from it. I have "find in files (quickly)". I have "fuzzy file name matcher" with Command-T. I have three different linters for Python. I have access to git log, blame, status, add and everything else with fugitive and diff is pretty, side-by-side one thanks to vimdiff. I can open remote files easily (this is built in). I have a bar with snippets (with placeholders I can fill in when pasted, of course) with SnipMate. I can open command line (like bash, ipython or coffee) in a window or tab with Conque. I can easily evaluate bits of code thanks to IPython (and thanks to their recent refactor, it's really paying off!). I use zencoding when I have to write HTML structure. And these are just things I use most often.

Now, I use VIM, which, I assume, means to you that I don't use an IDE - but what is the difference between, say, Komodo IDE and my VIM (aside from my vim being able to display in the console) really? It's IDE all the way down to unix shell and all the way up to git integration... And I'm old enough to be called a "grown up". Of course, I assembled my IDE myself from pre-made and some custom components, but it's IDE nonetheless.

The thing is that IDEs are there for a reason. Every single feature I mentioned above is a timesaver to greater or lesser extent. Every one of them increases my productivity in some common situation - and we're talking about Python and back-end (mostly) web-development here, which has much less repetitive, tedious tasks than, say, writing MFC app in C++.

Anyway, if you want to program in Notepad - feel free. Just don't, really don't, try to convince anyone to do the same by saying that notepad is somehow superior to Visual Studio for programming. It isn't. You'll understand this when you have to immediately fix a bug in production code on the server with something like nano, joe or mcedit and you introduce three other errors in trying to do so due to unbalanced parens, messed up indentation and a huge number of other issues that IDEs (yes, VIM included) protect you against. Well, I guess you need to grow up first to get access to that production code, I mean afterwards ;)


Check out Lispjobs. http://lispjobs.wordpress.com


"If it is so good why ain't it is used more?"

You know that Hacker News was created by Paul Graham right?

You definitely do want to read Paul Graham's essays.


I fail to see how this is going to convince anybody who hasn't tried lisp to give it a shot. This cartoon can be summarised as "All languages are accumulating bugs while lisp has some magic X and Y that provide a way around them."

A blub user will think "yeah whatever", IMHO.


My thoughts exactly, but I'd add this: I love and respect Lisp; I've written my fair share of it. There is absolutely nothing inherently magical in Lisp that prevents you from introducing bugs. You can just as easily make a logic error in your code that results in a bug. Bugs happen. Bugs will happen. There is no silver bullet. The only way to reduce bugs is to test thoroughly.

tl;dr: nice cartoon, "lisp means no bugs" == total BS, bugs can totally exist in Lisp code


Yeah, I think the link posted here does little to make Lisp popular, especially compared, for example, to a nice tutorial like the Seesaw one for building GUIs in Clojure [0]. There, you can see how interactive Clojure is, and how fast it can be to develop stuff. Note that I took this example because it's at the top of my mind, but I am sure similar demonstrations of interactivity can be made for other Lisps.

[0] https://gist.github.com/1441520


Holy crap, Seesaw looks awesome! See, that's far more powerful - I had no idea that even existed, and now I'm excited to play around with it. This is great!

I don't need a cartoon to sell me on something. I need to see how something can be used.


Thank you so much for that link! I'd been meaning to check out Seesaw, and it really lives up to its promise of making Java GUIs less painful.


"The only way to reduce bugs is to test thoroughly."

That is false.


No, it's not false, but I'll rephrase it to illustrate my point: "the only economically practical way to reduce bugs is to test thoroughly, unless you're NASA or you have some magnificent budget that somehow lets you hire people who can mathematically prove your code is bug free".

I am aware that you can prove code is correct, from a math POV. That's about the only way you can write "bug free" code without extensive testing, and even then, I'd argue that's not good enough - code needs to be tested. Bugs don't exist just in code. They exist in CPUs. They exist in configuration. They exist in dependency version mishaps that somehow make your code work incorrectly.

There is no silver bullet here. Sorry - there just isn't.


Type systems are, in essence, automated mathematical theorem provers designed to prevent certain classes of bugs.[1] The functional style is also a bug-reducer in that you can reason more accurately about the state of your program, because you have limited state-changing code to particular places. Certain language features like Lisp's restarts or garbage collection are bug-reducing because they give you concise ways of expressing features you might otherwise have had to implement by hand, and every line of code you write is another line for a bug to hide in.

Yes, testing is invaluable and should not be omitted. No, none of these are silver bullets that eliminate the need for testing. But it is false to say that "...the only economically practical way to reduce bugs is to test thoroughly." There are many ways to reduce bugs that can help alongside testing.

[1]: If you've only ever used C++ or Java, this sounds hilariously weak, but in languages of the ML family, the type system prevents null pointer exceptions, and in Haskell, the type system goes further and separates effectful and non-effectful code to ensure you don't cause side effects where you don't expect. Even more powerful are languages like ATS, which give you compile-time type errors when you've forgotten to allocate space for a null terminator for strings.
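As a rough Python-flavoured sketch of that ML-style option-type idea (the function and data here are made up, and this only illustrates the concept since Python needs an external checker like mypy to enforce it):

```python
from typing import Optional

# Hypothetical lookup: the return type admits the lookup can "fail".
def find_user(user_id: int) -> Optional[str]:
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)  # returns None when the key is absent

# A checker like mypy rejects an unguarded find_user(3).upper(),
# forcing callers to handle the None case, roughly the way an
# ML option type rules out null pointer exceptions at compile time.
name = find_user(3)
print(name.upper() if name is not None else "no such user")
```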


I think what you mean is "any efficient engineering process will make good use of testing".

But if you already have some tests in place, the most efficient way to reduce bugs is often something other than more testing, e.g. code review, design review/improvements, or static analysis (often provided by the language/compiler).

All of those other methods can and will reduce bugs. So your statement that testing is the only way is obviously false in both a practical and theoretical sense.


I see your point. Fair enough :)

Though I'd argue those other methods really boil down to testing. Code review is a peer "test" of code. Design review is a peer test.


> They exist in CPUs. They exist in configuration. They exist in dependency version mishaps [...]

They exist in software assisting with formal methods usage too, but what's worse is that frequently implementation is sound and the program still works incorrectly because of erroneous assumptions or bugs in specification. So no, you cannot be sure that your program is bug free without running it... I'm not sure if it is possible to get to being 100% correct in reality - what with cosmic rays altering memory and such...


I'm missing something; how does LISP enable bug-free programs?


Did you click on any of the blue links in the comic? They contain articles on language features with sections: 'Synopsis', 'How it kills bugs', 'Explanation' and 'Weakness'.


Whoa, hopefully I'm not the only one who missed that. I thought it was just animated text, not a link.


This is the crucial point which the whole discussion seems to have missed.


Great intro. Any comments on "Land of Lisp" as a book for a "want to learn Lisp" journey?


Excellent book for discovering and learning Lisp, maybe the best one actually. It covers all aspects of Lisp and provides lots of example code based on games. This is somehow more fun than the math examples of SICP (another great book I deeply appreciate too).


Thanks for this -- I've been thinking about taking up SICP but I'm not that math-oriented (I do want to change that, though!) I think that I'll pick this book up first.


If you enjoyed this little taste, you should get the book. It's really fun. There's a lot of code in it, and although it uses games as the subject for the examples, the code is definitely not fluff. It's been a year or two since I read it, but I recall that there's a bunch of graph traversal and search programs and even some low-level HTTP code.


Ruby Rogues podcast discussed it with the author (1hr mp3, transcript available): http://rubyrogues.com/043-rr-book-club-land-of-list-with-con...

Incidentally, the author (Conrad Barski) says in that discussion that Clojure is his favorite Lisp now and he finds it much more elegant than Common Lisp. I have a similar background (CL programming, now doing Clojure) and I agree.


Land of Lisp was how I first got into Lisp - I was hooked already before finishing even a third of it.

For production code, I ended up abandoning Common Lisp for the most part in favor of Racket, simply because of its library support and compatible, scoped dialects, but the principles are the same in either case. I'd actually recommend starting with something less 'batteries included' if you have the patience, because it makes it easier to see the fundamental beauty of the language, without getting distracted by something as "irrelevant" as library support.

Some people say that Land of Lisp makes it hard to develop 'useful' applications afterwards, but for that, I think Practical Common Lisp is a good secondary resource, and it's easy to skim once you've completed Land of Lisp. Land of Lisp is much better as "Lisp Propaganda" - ie, for someone who's interested in giving it a shot, it does a good job of selling the Lisp paradigm as a whole philosophy, even if it doesn't have space to cover all of its practical uses[0].

[0] I won't sell it short, though - it does a good job for that too... several of the games it implements are non-trivial.


Superb. I went on to read the sample chapter, which was equally amusing and interesting. I think I'm going to order the book, despite the fact I can't really think of any practical applications for Lisp in my day job - although who knows.


Most of the time, you don't want to save the world because this presents scaling problems. Instead, save a little corner of the world and be open about how you are doing it. If you do this right, then you will garner lots of imitators. Then if your way of "saving the world" is well documented and robust enough to avoid the "cargo cult" pitfall, you will convince some large part of the world to save itself.

Note the implication: You don't save the world by telling it, "You're doing it wrong." You save the world by getting the world to covet your success.


There is a less insane way to these insights: http://www.paulgraham.com/onlisptext.html


Am I the only one having a hard time reading lisp?

Non-functional programs read like plain English (particularly Python), but I just can't get my head around the functional ones.


> Non-functional programs read like plain English.

It depends partly on what you're used to. You're used to programs specifying instructions (since that's what we think of as an algorithm), so yes, Lisp won't look like "plain English" instructions.

However, functional programs don't define instructions, just relationships. For the most part, they wash their hands of specifying the details of the execution, in favor of looking only at the high-level, mathematical relationships.

In that sense, a Lisp program like

(define x (map (lambda (n) (* 2 n)) (sort '(2 3 2 1 4) <)))

is defining a relationship between x and the list '(2 3 2 1 4), the same way that

x = 2y + 3

is defining a relationship between x and y. The latter doesn't tell you how to get the value of x (for a given y value), but defining that relationship is sufficient to understand the concepts.

So, you could say that non-functional programs read like plain English instructions of actions, whereas functional programs read like plain English mathematical relationships.
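To make that contrast concrete in a language most people here know, here's the same sort-then-double computation in Python, once as step-by-step instructions and once as a declaration of what the result is (the function names are just for illustration):

```python
# Imperative: explicit instructions that build up state step by step.
def double_sorted_imperative(xs):
    ys = sorted(xs)           # sort a copy of the input
    result = []
    for y in ys:
        result.append(2 * y)  # push each doubled element in turn
    return result

# Functional: a single expression stating what the result *is*.
def double_sorted_functional(xs):
    return [2 * y for y in sorted(xs)]

print(double_sorted_imperative([2, 3, 2, 1, 4]))  # [2, 4, 4, 6, 8]
print(double_sorted_functional([2, 3, 2, 1, 4]))  # [2, 4, 4, 6, 8]
```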


Imperative programs read like English. Functional ones read like Apache.

The main difference is that a functional program mainly talks about what things are, not what they do. For instance:

  // C(++)
  int square(int n)
  {
    return n * n;
  }

  -- Haskell
  square :: Int -> Int
  square n = n * n
The C code reads like a procedure to follow: "return n times n to whoever called you" (I'm anthropomorphising square(), here). The Haskell code reads like a description: "the square of n is n times n".

That was the first major difficulty. Now the second one: functional code is often like reversed imperative code:

  // C(++)
  int compute(int n)
  {
    int x = foo(n);
    int y = bar(x);
    int z = baz(y);
    return z;
  }

  -- Haskell
  compute :: Int -> Int
  compute n = baz (bar (foo n))

  -- alternate definition
  compute n = baz . bar . foo $ n
    where g . f = \x -> g (f x) -- function composition
          f $ x = f x           -- function application
So, in the Haskell code, you see that the data flows from right to left, instead of top to bottom. Like Unix pipes, only reversed.

The final difficulty is getting used to the fact that functions are passed around directly. In C++, Java, or Python, we often pass around objects, which may or may not hold the same methods as the others that were passed around in the same way (that's polymorphism). Subtype polymorphism is neat, but most often you need it because you want one method to change depending on various factors. A simpler way to do this is to pass around the function directly.
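A hypothetical Python sketch of that difference (all names are made up): the first version smuggles the behaviour in via an object's method, while the second just passes the function itself.

```python
# Subtype polymorphism: the behaviour travels inside an object.
class Doubler:
    def apply(self, x):
        return 2 * x

def transform_with_object(strategy, xs):
    # Call the method carried by whatever object was passed in.
    return [strategy.apply(x) for x in xs]

# Passing the function directly: no wrapper class needed.
def transform_with_function(f, xs):
    return [f(x) for x in xs]

print(transform_with_object(Doubler(), [1, 2, 3]))          # [2, 4, 6]
print(transform_with_function(lambda x: 2 * x, [1, 2, 3]))  # [2, 4, 6]
```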

That leads to some powerful, though uncommon in the imperative world, idioms. For instance, you can write your own customized loops. Imagine for instance that you want to process lists in Haskell:

                                -- A List is either
  data List a = Empty           -- an empty node,
              | Cons a (List a) -- or a cons cell,
                                -- with an element and a list.
Now let's process the list

  inc-all :: List Int -> List Int
  inc-all Empty      = Empty
  inc-all (Cons e l) = Cons (e + 1) (inc-all l)

  dbl-all :: List Int -> List Int
  dbl-all Empty      = Empty
  dbl-all (Cons e l) = Cons (e * e) (dbl-all l)
See how much they have in common? There's a way to factor out that, with a map:

  map :: (a -> b) -> List a -> List b
  map f Empty      = Empty
  map f (Cons e l) = Cons (f e) (map f l)
Note that the first argument is a function, hence the (a -> b) between parentheses. Using it is very simple:

  inc-all l = map (λe -> e + 1) l
  dbl-all l = map (λe -> e * e) l
And Haskell can make it even more concise, with what we call partial application. Haskell functions actually have only one argument. Multiple arguments are simulated by having the function return another function. Here:

  add :: Int -> (Int -> Int)
  add x = λy -> (x + y)
Which is the same as:

  add :: Int -> Int -> Int
  add x y = x + y
Or even

  add :: Int -> Int -> Int
  add = λx -> (λy -> (x + y))
So, inc-all and dbl-all above can be written as:

  inc-all = map (λe -> e + 1)
  dbl-all = map (λe -> e * e)
Without those fundamentals, one doesn't stand a chance at understanding real world functional code. It's just too different. It's no harder, though. The main difficulty here is to change your mindset.
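For comparison, Python can approximate partial application with functools.partial; it's a clunkier spelling of the same idea (the names below are mine, not from the Haskell above):

```python
from functools import partial

def add(x, y):
    return x + y

# Fix the first argument, like writing (add 5) in Haskell.
add5 = partial(add, 5)
print(add5(3))  # 8

# inc-all / dbl-all, partial-application style:
inc_all = partial(map, lambda e: e + 1)
dbl_all = partial(map, lambda e: e * e)
print(list(inc_all([1, 2, 3])))  # [2, 3, 4]
print(list(dbl_all([1, 2, 3])))  # [1, 4, 9]
```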

Now I haven't talked about macros…


COBOL reads like plain English. The ALGOL-like family of languages not so much.


I have no problem believing in insectoid domination of earth. Bug-free software, Lisp or otherwise, does strain my credulity.


This book seems inspired by _why's poignant guide to ruby: http://mislav.uniqpath.com/poignant-guide/book/chapter-1.htm... Not that that's a bad thing, but I'm just surprised no one else has mentioned it.


This looks like a comic from the 70s, photocopied and hanging on the bulletin board in the computer lab.


Typical Lisp propaganda: many brave non lispers are part of the functional, brevity, continuation and DSL guilds. And it chooses to ignore that the biggest battle was won by the type system guild...


I thought Lisp was the pinnacle of programming until I discovered Haskell.


There is a lot of cool stuff out there. Haskell is one of them.


A fantastic metaphor!


Apart from the excellent comic, I loved the insight into how laziness helps fight bugs. I'd never thought about it in those terms before.


Who writes this crap? I wrote in Lisp (Scheme), and it's the most bug-ridden language I've ever used. And I've used more than five.

It's an undebuggable language, and that makes it full of bugs. You have to follow complex ideas and keep scores of them in your head; this is not for normal humans.

When you pass the wrong type somewhere that doesn't support it, everything goes to hell, and as a programmer you start to flame.

Lisp is a piece of shit for the masses; it's useful to a handful who didn't spread the ideas well, and people went where it was easier to write code and where less was needed to get going.


I enjoy Lisp, but I have to agree with you about Scheme debugging. MIT/GNU Scheme has to be the most unhelpful interpreter I've ever used. Error messages are loud and completely unhelpful, and by the end of it I was convinced the REPL was actively trying to make me feel stupid.


Did you read the manual? http://www.gnu.org/software/mit-scheme/documentation/mit-sch...

It's not as easy as setting breakpoints in an IDE like Eclipse and using a visual debugger there, but it's definitely a workable command-line debugger. You just need a little patience in order to learn how to use it, and probably also a pretty good understanding of Scheme's execution model, but that would be true for debugging any programming language.


> Did you read the manual?

For real?

> It's not as easy as setting breakpoints in an IDE

I don't use an IDE. I'm comparing against command line tools for scripting languages, Haskell and Common Lisp. Some fail more helpfully than others.


Sorry for my tone, I guess YMMV... I just never had any issue with MIT Scheme's debugger, although I admit that CL with SLIME is a bit more useful, especially compared to MIT's weird Emacs-clone (or -fork?) "Edwin".


For a SLIME-like REPL for Scheme, you might want to check out Geiser: http://www.nongnu.org/geiser/

It works great for guile and racket.


Oh, only five other languages? Come back after your twentieth and we'll talk...

Now seriously: in which Scheme did you "write"? Racket has an excellent debugger, a brilliant contract system and exceptions, not to mention a REPL and a powerful type system. I'm sure you didn't want to display your lack of experience and knowledge; it just happened. I won't hold it against you, just don't write anything more about things you know nothing about.



