Lisp, more than any other class of language, is what you make of it. Also, it is just as susceptible to the whims of culture as other languages (perhaps MORE so). Take the Clojure world, for example...
Because it is being used by a generation of programmers who cut their teeth on Rails and were frustrated by J2EE verbosity, popular Clojure code tends to be written such that APIs are quite readable and bereft of too much cleverness (the cleverness is usually hidden in the implementation of the API, rather than the interface).
Examples: Compojure, Ring, ClojureQL, Incanter, etc.
S-Expressions don't befuddle people. People befuddle people.
How is it not an obvious oxymoron to call a language "too powerful"? Imagine a physicist or a mathematician calling a theory "too powerful". It explains too much! You can prove too much with it!
We're not even close to understanding how to reliably make good programs, so naturally we don't understand how to do that with Lisp either. Lisp is just a particularly pure medium for programming. Most of the things people are saying about Lisp ("it allows people to invent their own little worlds") are really just statements about programming.
Oh and one other thing. A pox on the terms "readable" and "unreadable" floating around as free variables. They are hopelessly relative and their prominence in any discussion of programming (or should I say every discussion of programming) renders said discussion pointless. We literally don't know what we're talking about.
'Imagine a physicist or a mathematician calling a theory "too powerful". It explains too much! You can prove too much with it!'
Actually, I can perfectly reasonably imagine a physicist complaining that a theory is too powerful. If a theory has the capability to explain any possible observation, it similarly does not have the capability of being falsified...
That seems like a degenerate trivial case. A better objection would be Occam, that a simpler theory is preferable where adequate. But even that breaks down here, because we don't have any adequate "theories", only the hard problem of how to build complex software systems.
I can see refusing an approach on the grounds that it doesn't work, but to refuse it on the grounds that it works too well?
Even worse than "readable" and "unreadable" are "maintainable" and "unmaintainable". At least I know what "readable" means at some level. But "maintainable" usually turns into a generic catch-all term for "this doesn't scare me too much".
It's not an oxymoron because programming languages are tools for accomplishing specific purposes, and power is not the only thing that makes a tool good.
For what it's worth, one can easily call a theory "too powerful" for practical purposes. E.g., one often uses Newtonian mechanics to analyze something rather than Einstein's more powerful approach. Human brainpower is a sadly limited resource, so "easy" often trumps "powerful".
It's funny you should cite limited cognitive capacity, because the same starting point leads me in exactly the opposite direction.
"Human brainpower is limited, so we should use weaker languages" seems a strange argument. The whole point of using more powerful languages is that they can express more with less, allowing us to fit more into limited human RAM. It's because our brains can't manage millions of details that we don't build systems directly out of machine instructions in the first place and resort to cleverer strategies like higher-level languages. At what point along the way does that process stop being valid?
It's for the same reason that we sometimes favor Newton's tools over Einstein's: RAM isn't the only cognitive limit. "Powerful" in the sense you're using is similar to compression. The deeper the compression, the more effort it takes to unpack fully.
Deeper abstractions are only useful in coding to the extent that you don't have to understand the details. But abstractions leak.
As Martin Fowler writes, "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." You might be able to turn 100 lines of Java into 1 line of Lisp, but if it takes your colleague more time to understand and modify the 1 line of Lisp than the 100 lines of Java, then you've made things worse.
The deeper the compression, the more effort it takes to unpack fully.
That implies that humans must mentally compile the code. By definition that's not true of a higher-level language.
Ditto for your invocation of leaky abstraction: by definition, a program written in a language can be understood by learning the language and reading the program. If that's not true, then the program must not really be (fully) written in that language.
The whole reason why we prefer a program in, say, C to a 100x larger program written in assembly language is that it's easier to understand. Given that available evidence points to source code size as the best measure of complexity and best predictor of errors -- there was a recent thread on HN about this with links to several papers -- I think the burden is on you to explain how the benefit of writing smaller programs in more powerful languages holds good up to, say, Java, but breaks down when you move to, say, Lisp or K.
In theory, theory and practice are the same. In practice, they're different.
The higher the language, the more abstract the abstractions. Each symbol represents more, and therefore hides more. There are manipulations on the symbols that are plausible in context but idiotic when one looks at the hidden details. I see that regularly with developers who have only worked in higher-level languages. From their perspective, two approaches to a problem will look equivalent. In practice, there will be three orders of magnitude difference in performance. By definition (well, your definition) that doesn't matter in a higher-level language. In practice, it does.
I don't think I've said that Java is some sort of magic universal middle point in languages (and indeed, I don't think that exists), so I don't accept the burden you're pushing on me. You asked how a language could be too powerful, and I tried to answer your question. If you have more questions, feel free to ask.
Not "pushing on you". Just very interested in this question.
So this is about performance? But that is a separate issue. The claims in the OP are that a language can be "too powerful" because programs written in it, although shorter, are too hard to understand. I'm looking for evidence of this that doesn't simply reduce to "intuitive is just what you've seen before". The interesting thing is that there doesn't seem to be any.
(Performance is a separate issue because it has to do with the gap between abstractions and the machines they have to execute on. I think this may be the Achilles heel of the highly mathematical FP approaches. Still, it's a change of subject to interpret "too powerful" as "too powerful for the machine" rather than "too powerful for the brain".)
The underlying problem seems to be that abstraction is hard. That is, for any complex system, it's hard to come up with a set of concepts that can express the system with less overall complexity. This is true of programming in all languages. The problem is masked in lower-level languages because the abstractive weakness of the tool provides an excuse to produce the usual reams of code and thus the usual runaway complexity. But that doesn't mean the task is easy in a more powerful language. The hardest part -- coming up with a good set of concepts in the first place -- remains. Faced with that, people do sometimes rush to apply the technical devices offered by, say, a Lisp to a not-clear-enough set of concepts. The resulting programs can indeed be hard to understand. I've done it myself.
Performance is not a separate issue because as professional programmers we aren't selecting tools for anything other than their ability to help us get things done for our users. This isn't a question of "too powerful for the machine", it's a question of "too abstract an approach for a team of humans to efficiently understand the implications of the choices their members make".
In that sense, some abstractions are indeed too "powerful" for the problem and the people at hand. When training somebody to work the register at a corner store, one does not start by deriving number theory from Peano's axioms and then telling them to work it out from there. You make sure they understand basic arithmetic through simple practice and then you train them on the specific operations for money, mainly with further practice. For the novice cashier, "power" in the sense you use comes at too high a price with too low a payoff to be worth it.
By the way, what came across as pushy was you insisting I had a burden to justify something I had never said (and incidentally don't believe).
Always lacking from these kinds of debates about code: actual code.
Does anyone have examples of crazy unmaintainable Lisp code we could look at?
On the other side, what examples of powerful / elegant Lisp code do you feel best make the case for Lisp?
I fully realize this is subjective ("no accounting for taste"), and that a handful of anecdotes doesn't really settle anything. Nevertheless, I'm interested in what the failure modes might be for Lisp, and whether they have analogs in the languages I'm more familiar with.
I'm also bothered by a tendency in the Lisp community to say, "Lisp is the best, and I have all this awesome Lisp code, but no, I'm not going to show it to you." If someone asked me for great C, C++, Python, or Perl code, I have favorite examples I'd point them to without hesitation. What gives, Lispers? Is your Lisp code so personalized or specific to the problem that you fear it really wouldn't make any sense to an outsider? If so, how come this doesn't translate into maintainability problems?
I don't think my own Lisp code is really worth showing off, so I'm going to cheat.
Here's some impressively awesome Lisp code that leverages features that might be considered "too powerful". It's some Common Lisp from Peter Norvig's Paradigms of AI Programming: http://norvig.com/paip/auxfns.lisp
The memoization stuff is pretty cool and so is the stuff for managing resources (defresource and with-resource).
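For readers who don't read Common Lisp, the idea behind that memoization code can be sketched in Python (a rough analogue of the technique, not a translation of Norvig's code):

```python
def memoize(fn):
    # Cache results keyed by the argument tuple; recompute only on a miss.
    cache = {}
    def memoized(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return memoized

@memoize
def fib(n):
    # Naively exponential; memoized, each value is computed once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # returns instantly; the unmemoized version would take ages
```

In modern Python you'd reach for `functools.lru_cache`, which does the same job; the point is just how little machinery the idea needs.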
That's one of the best books on programming I've ever read (I mean started, set aside, then picked up again and again over many months, and still haven't finished), and there's a lot of really nice, elegant code in it. The way he describes searching graphs and trees is really nice, but that code doesn't use any of the really Lispy stuff; it would look pretty similar in Python or Ruby. The rest of the code is here: http://norvig.com/paip/README.html
A well-engineered Lisp program (and in any other language, actually) isn't a stream of beautiful lines, but a set of components, any of which might have a complicated implementation, but all of which have a good clean interface. Lisp is beautiful not because it allows you to write beautiful 20-line programs, but because it allows you to design large systems that still have a chance of being maintainable, despite the complexity of the problem they are solving.
I can still show you examples of beautiful and horrible 20-line Lisp programs, but I'd rather show you examples of large-scale design. The most popular might be Emacs. Emacs has a million lines of elisp; imagine if it were written in Java or C++. Scary thought :)
In a large system like Emacs you'll find many examples of beautiful and ugly code, but the overall system is still beautiful and maintainable. This talk by Stuart Halloway might explain what I mean by that: http://vimeo.com/1013263
In a nutshell, Emacs is big, but small for its size, meaning that those 1,000,000 lines of elisp do much more than a million lines of Java will ever be able to do. That property comes in part from using a Lisp as the implementation language (and not a very good lisp at that :)
Great talk, thanks for the link. The power of emacs isn't immediately obvious, until you try to use it for something it wasn't explicitly designed for.
The main point about lisp is 'there is no accounting for taste'.
Lisp leaves most things up to the programmer's taste.
You can write Lisp that looks like Fortran, or C++/Java, or Scheme.
You can make a DSL that directly models the problem.
You can use objects or no objects, do everything in CLOS or do everything with structs.
You can write your own object system.
You can make it fast or slow, you can use correct data structures or you can do everything with lists.
You can use macros for everything or you can never touch macros ever.
You can make your program one big macro.
None of these things are 'bad taste.'
Most people have different taste from you.
Most people only end up learning the part of the language consistent with the paradigm that they like.
If you only know that piece of the language, you will have difficulty working on someone else's project when they are working in a different paradigm.
None of the individual pieces of the language are particularly difficult.
People complain macros are difficult to understand. Macros are easy. If you can understand a program that concatenates lists to make a new list, you can understand a macro. Macros are quite literally 'just lisp code'.
Are some macros written in a way that you can't personally understand? Most likely, but that is not an issue inherent to macros. You have to be careful about validating inputs and providing proper debugging and type checking at macro-expansion time, just like in any other program. There does seem to be a stupid tendency to cram an entire macro into a single function. This is foolish; the whole point is that you have the entire power of the Lisp runtime available. There is no reason to write it like C preprocessor garbage.
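To make "macros are just list manipulation" concrete for non-Lispers, here's a toy analogy in Python. It represents Lisp forms as nested lists and writes a hypothetical `unless` expander as an ordinary function (a sketch of the idea, not how any real Lisp implements it):

```python
def expand_unless(condition, *body):
    # (unless c body...) expands to (if c nil (progn body...)).
    # The "macro" is plain code that assembles lists into a new list --
    # the output list IS the code that will later be evaluated.
    return ["if", condition, "nil", ["progn", *body]]

form = expand_unless(["null", "x"], ["print", "x"])
print(form)
# ['if', ['null', 'x'], 'nil', ['progn', ['print', 'x']]]
```

In a real Lisp the input and output are s-expressions rather than Python lists, but the expander is exactly this kind of ordinary list-building code.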
So, how does one write good lisp code? Well, one way is to pick some standards. This is as difficult as having someone in charge willing to gently say 'this doesn't really match up with the style of the code around it'. If I've inherited some code, when I do a bug fix I'm going to do my damnedest to stick with the style it's written in (unless it is truly tagbody/go awful, in which case I might rewrite).
This kind of turned into a rant, I apologize. I guess my point is that sure, lisp is powerful, but the real issue is the number of options that it provides. At some point, you have to pick a subset and a style, and go with it. And then you have to be comfortable learning if you inherit something that you don't know yet.
Lisp leaves most things up to the programmer's taste.
I don't entirely disagree, but I think there's more to it than that. While the language certainly does offer some free choices, more often there are advantages and disadvantages to each, so that in any particular situation, some choices are better than others. Becoming an expert Lisp programmer requires learning about these tradeoffs, which takes experience and, usually, guidance from existing experts.
That's true of any language, of course, but some of the facilities Lisp offers are rare among other languages, so that people coming to Lisp from some other language are unlikely to have experience with them.
Oh, one point about macros in particular. If you have to resort to reading the implementation of a macro to understand what it does, the person who wrote it screwed up. Macros should always have documentation strings explaining their syntax and semantics. If you find yourself in that situation, the best thing to do is to go to the REPL and use `macroexpand' interactively to see the expansions of the macro calls you're interested in.
I don't entirely disagree, but I think there's more to it than that. While the language certainly does offer some free choices, more often there are advantages and disadvantages to each, so that in any particular situation, some choices are better than others. Becoming an expert Lisp programmer requires learning about these tradeoffs, which takes experience and, usually, guidance from existing experts.
There is a saying 'perfection is the enemy of done.' In any particular situation, there is probably not a solution that is both optimally efficient and also optimally elegant. (This is true in any programming language). If you work towards that goal too much, you will likely miss your deadline.
But the point is, the little advantages and disadvantages don't matter until they do. There isn't going to be a big difference in most programs between using a loop to iterate a sequence, and using map nil with a lambda, and using do* (for example). In fact, whether there is any difference at all in the resultant assembly or byte code will depend entirely on the compiler implementation. It is a style thing... so try to be consistent, and work within what you are comfortable with.
That's true of any language, of course, but some of the facilities Lisp offers are rare among other languages, so that people coming to Lisp from some other language are unlikely to have experience with them.
Currently, the only thing I can think of that is really unique is the macro facility (and that is only because, as soon as a language adopts a macro facility, it becomes a Lisp).
As a programmer, the focus of my job is learning new things. I am basically a mechanism for translating the new things I have learned into computer code. If I can't learn a few measly language features, what good am I going to be as a 'thing I just learned' to 'computer code' translator? And like I said, use what you are familiar with, until you are faced with something someone else wrote, or you have the time to learn new things. But don't punt.
If someone doesn't have experience with a given piece of the language that has been used, I expect them to pick up a book or online resource about it (and then, possibly most importantly, play with it). It is not hard, but it does take effort. There is nothing in Common Lisp that requires genius-level intellect. (God knows, I'm certainly not that bright.) No one requires that fresh-faced C interns be pointer-arithmetic gods, but I'm sure they are expected to learn it if it is part of the job.
Oh, one point about macros in particular. If you have to resort to reading the implementation of a macro to understand what it does, the person who wrote it screwed up. Macros should always have documentation strings explaining their syntax and semantics. If you find yourself in that situation, the best thing to do is to go to the REPL and use `macroexpand' interactively to see the expansions of the macro calls you're interested in.
I'll add that in addition to doc strings for syntax and semantics, there should also be assertions written into the macro about the syntax and semantics. If I am passing a number or list where the macro is expecting a symbol, an error should get thrown during the macro-expansion phase. Macros are programs like anything else. Validating inputs and throwing an error at the earliest possible time is a good rule to go by.
So, macroexpand-1 is a good start, if the macro is implemented correctly, does what its documentation says, and you are simply flubbing the syntax. (Of course, it should be yelling at you for flubbing the syntax).
However, when there is a bug in a macro, the only thing that macroexpand-1 will tell you is that the macro doesn't work. You'll macro-expand it and say 'yup that's the wrong generated code.' It doesn't really tell you anything about how to actually fix the macro unless you are already familiar with the macro's code. Having examples of inputs with bad outputs will aid in pinpointing the problem, but not unless I already understand how the program works.
Macros are lisp programs, and can be as complicated as any arbitrary lisp program. Writing a more complicated macro is not screwing up (I think this is an important distinction to make)... inadequately documenting, explaining, and bulletproofing it is. Someone might have to debug it later, so strive to write readable macro code. It isn't hard as you are just constructing lists and writing normal lisp code with minimal efficiency requirements.
So I guess that was a roundabout way of me saying "I agree, mostly."
I was expecting a self-signed certificate, which I usually accept, but this one shows up as "localhost.localdomain". No thanks, I don't want to trust this cert to sign for my localhost :)
Instead of comparing Lisp with other languages, let's consider the problem stated, of people inventing their own little worlds - i.e. DSLs.
Brooks said that a "programming product" (meaning one that can be used by other people) takes x3 the work of a "program" that works. He talks about documentation, testing, generalization and "can be run, tested, repaired by anyone". I think this means careful API design for usability, discoverability and understandability - not just efficiency - is important.
So, inventing new worlds (DSLs) is not a problem; inventing your own little worlds that are hard to understand and use is a problem. But it takes x3 as much work to do it right, and it usually isn't worth it unless it is explicitly intended to be used by others (e.g. it's a library for sale; or a utility for use within a large organization; or a web API).
Secondly, an example from the history of relational databases. Codd had the idea of relations plus a high-level language. He designed a couple of high-level languages, but no one liked them. Instead, Boyce and Chamberlin came up with SQL (originally "SEQUEL"), which was usable by mere mortals. "Since Codd was originally a mathematician (and previously worked on cellular automata), his DML proposals were rigorous and formal, but not necessarily easy for mere mortals to understand." http://webcache.googleusercontent.com/search?oe=utf-8&rl...
Sometimes, designing a DSL for others is so hard that it takes a different person to do it.
This isn't specific to Lisp. It's an issue for designing DSLs and APIs (and languages in general), which can be done in any language. Lispers may invent more often and with more variation, because lisp is more powerful. Power --> Responsibility
Summary: poor abstraction is worse than no abstraction.
In Java, the "DSLs" you tend to get are at the class level, in different files. Like any abstraction, these can be well- or ill-designed. There are some differences from Lisp: (1) the abstractions are less flexible/powerful, so there's less to understand; (2) having them in different files makes it harder to grasp the whole than if it's all in one file (or one screen); (3) the syntax is fixed, so you can at least understand the symbols without understanding anything else.
I think that inventing a new language has the best chance of making something that is a genuinely better solution. But if you want it to be understandable, it's helpful to link it to existing concepts that are already known, understood, and with known modes of use and application, perhaps via metaphor - i.e. adoption through familiarity.
But the general case for adoption seems to be that something must be x10 better (or compelling in some way) for people to go through the pain of adoption, of learning new syntax, new concepts, new ways of working, new infrastructure, new tradeoffs, new gotchas, new shortcuts, new consequences, new policies, new standards, new training, new suppliers, and so on.
If you can reduce the pain of adoption, adoption is more likely.
Put another way: there are two kinds of pain: the pain your solution addresses, and the pain your solution creates. The reason to adopt your solution is to reduce pain, but if the solution itself brings too much pain, it's simply not worth it.
Relational databases are an example of this. The relational concept solved the pain of storage change, but also created pain (difficult to use; x10-x100 slower). As those secondary pains were solved (with SQL; with optimization strategies and Moore's Law), its adoption accelerated.
So, sometimes a fundamental improvement needs to place power and flexibility over ease-of-use - but to be adopted, sufficient ease-of-use is essential. And your abstraction has to be, not just good or better, but x10 better.
Besides the superficial ("parentheses!"), I've heard two major complaints about Lisp. One is that it's just too powerful. The other is that macros don't really let you do anything you can't do with lambdas in other languages, just with (much) easier quoting.
I don't think you can have it both ways. (I'm not saying that any one person is making both of these points, but person B's anti-Lisp argument is arguing against person A's anti-Lisp point.) If you can't use Lisp on a team because somebody might write a macro that you don't understand, how can you deal with Python or Ruby or C# code that inevitably tries to fake it by taking functions-that-return-other-functions as parameters?
(The other common alternative I see these days is to make your DSLs in XML. That means they're more verbose, you can't step through them in your debugger, and so on -- plus you still have the problem that allowing arbitrary DSLs is too powerful! I suppose there's also a third alternative: don't even try to use higher-level abstraction, and build giant apps out of low-level spaghetti code.)
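The "functions-that-return-other-functions" workaround mentioned above looks like this in Python (a generic sketch with hypothetical names, not from any particular codebase):

```python
def make_validator(field, predicate):
    # Returns a new function closed over `field` and `predicate` --
    # the closure-based substitute for what a Lisp macro would generate.
    def validate(record):
        if not predicate(record.get(field)):
            raise ValueError("bad value for %r" % field)
        return record
    return validate

# Each call manufactures a specialized function.
nonempty_name = make_validator("name", lambda v: bool(v))
nonempty_name({"name": "Ada"})  # passes the record through unchanged
```

This gets you most of the abstraction, minus what macros add: new syntax and work done at compile/expansion time rather than at run time.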
I've seen perfectly readable code in Lisp, even making extensive use of macros. (There are conventions, and good programmers do follow them.) I've also seen perfectly undecipherable code in every language I've ever seen. When I've watched individuals write code in multiple languages, those that write bad Lisp tend to write bad anything. Nothing that I've seen leads me to believe that there is any significant set of programmers who can only write bad Lisp code, but can write great code in lower-level languages. We can give them Lisp, though, and they will at least write less of it!
You could make the same argument about Unix, but I still say that with great power comes great responsibility. If you can't handle the responsibility, maybe you should use watered-down tools. That's somewhat bleak, though, and following that logic you end up with Java. Java is not the worst language on Earth, but I doubt it's your favourite, and no one writes it without a code-generating and refactoring IDE.
This is both glib and anecdotal, but Java is one of my favorites and I never write Java with an IDE. It has an extensive standard library that I can count on having available without requiring users to install optional packages, it's easy to write portable code and I can write software for everything from a cheap feature-phone to a high-end server.
There's a lot of horrible Java code on the internet, mostly because there are tons of people writing Java. Nevertheless, in good hands, Java can be quite elegant and succinct.
> Nevertheless, in good hands, Java can be quite elegant and succinct.
Could you elaborate, perhaps with an example? Not trying to start a flamewar or anything - but I honestly can't imagine a case where Java is succinct relative to other languages. And as far as elegance goes, Java is so overly verbose and fixated on classes ('too many classes? there's a class for that!') that I have a hard time thinking of it as elegant.
But then again, maybe I'm just used to reading bad Java code everywhere, and you've been lucky enough to find the good stuff!
It's just the nature of the language. I honestly don't mind recent versions of Java all that much but without an IDE I would lose my mind in about 3 seconds.
And once you start writing something significant, or something that needs to work cross-platform, say hello to design patterns to work around the straitjacket.
I'll take Lisp, JavaScript, Ruby, or Python any day of the week. All languages have warts but to me it seems that Java has warts by design.
One of my favorite things to do in Java: save anything to XML, and it even handles all the objects your objects are pointing to. All you need is for your objects to follow JavaBeans conventions (a public no-argument constructor plus getters and setters; XMLEncoder works on beans, not on Serializable). Then Java handles everything else using reflection.
public void saveAll(Object[] objects) {
    File file = new File(filename + ".xml");
    try {
        BufferedOutputStream bos =
            new BufferedOutputStream(new FileOutputStream(file));
        XMLEncoder xenc = new XMLEncoder(bos);
        for (Object o : objects) {
            xenc.writeObject(o);
        }
        // close() flushes the XML and closes the underlying stream
        xenc.close();
    } catch (FileNotFoundException e) {
        // thrown by FileOutputStream if the file can't be created
        System.err.println("Could not open " + file + ": " + e.getMessage());
    }
}
C# also gets knocked for verbosity (though it's a bit better than Java). But with type inference via `var` (available since C# 3.0), judicious use of implicit casts, and `dynamic`, the language can be made concise and elegant.
An example from my library[1]:
SqlCommandEx sql = "select * from person";
foreach (dynamic p in sql)
{
var full_name = p.first_name + " " + p.last_name;
var age = ((DateTime)p.dob).YearsAgo();
//etc
}
//automatically disposes the connection
I don't know Java, but it's often worth trying this sort of exercise out. I'm open to the idea it might be perfectly possible.
I once took one of Peter Norvig's Python programs (a spelling corrector) and converted it into VB.NET, just to see whether VB was as succinct as Python. Turns out it is:
Or finding out that C# anonymous functions are more concise than Haskell. Thing is, sometimes these things surprise you. Languages do get reputations that are hard to shake.
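For reference, the heart of that spelling corrector is only a few lines of Python. This is a simplified sketch of the candidate-generation step, not Norvig's exact code:

```python
def edits1(word):
    # Every string one edit away from `word`: deletes, transposes,
    # replaces, and inserts, over a lowercase alphabet.
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)
```

A full corrector then ranks the candidates by their frequency in a training corpus and picks the most probable one, which is why the whole program stays so small.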
I would suggest reading Effective Java by Josh Bloch, it has some great tips on how to write code and design APIs as well as nicely structured sample code. I'm a much better developer (not just in Java) for it.
Also browse some of the core Java source code to see really elegant implementations by him.
Hey, that is pretty awesome. I had zero Java experience, but I felt inspired by your post, so I searched "java without ide" and got an XML parser up and running in approximately 20 minutes.
For the curious reader, here's how to do Java entirely from the command line:
1) install the JDK and add its "bin" directory to your PATH. For me this was "jdk1.7.0_02/bin"
- note, this is not my code. It's just a snippet I found from the 'net. Feel free to clean it up. =) My goal was "to get up and running as quickly as possible". Production code would be cleaner than this.
2) save that snippet as ReadXMLFile.java
3) create a small test.xml in the same directory
4) run these commands:
javac ReadXMLFile.java
java ReadXMLFile test.xml
And the XML file is parsed!
So why go through all this trouble, when you could just write it in Python?
Deployment comes to mind as one reason to go with this approach; fewer systems have Python installed than Java runtime.
So therefore when you deploy your app, you'd have to either 1) make customers install Python, or 2) (more likely) ship the contents of the "Python27" folder along with your Python app, and invoke your app via a shell script, or something. I'm not sure.
Deployment comes to mind as one reason to go with this approach; fewer systems have Python installed than Java runtime.
I would like to see proof of this. Perhaps Windows has fewer Python installs, but I would find it hard to believe that there are more Linux systems out there with Java (which most distros don't install by default) than Python (which more systems do install by default).
You can bundle a Python program up into an executable with things like py2exe and cx_Freeze, and it will do most of the grunt work of bundling up any necessary dependencies (scripts, stdlib stuff, DLLs, etc.) into the .exe.
(It's been a while since I used either of those tools, so there may very well be other alternatives around now.)
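For example, a minimal cx_Freeze build script looks roughly like this (a sketch from memory with a hypothetical app name; check the current docs, since the API has changed over the years):

```python
# setup.py -- build a frozen executable with: python setup.py build
from cx_Freeze import setup, Executable

setup(
    name="myapp",                        # hypothetical project name
    version="0.1",
    description="Example frozen app",
    executables=[Executable("myapp.py")],  # entry-point script to freeze
)
```

The build output lands in a `build/` directory containing the executable plus the interpreter and library files it depends on.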
At uni we first learned to program Java using just the command line and Emacs.
But when working on a big project, or simply wanting to get up and running fast, a good text editor or an IDE is so nice to have. I hate having to write my own getters and setters, or even just the public-static-void-main boilerplate.
Yet I have had trouble with having the "right version of Java runtime" installed in order to run programs, especially on Mac OS X where you sometimes cannot upgrade Java without upgrading the OS. I think there is still an installation/compatibility issue to consider. No free lunch.
This is your selection bias speaking. Tons of people LOVE to code in Java, or the similar C# for that matter, because that's the most suitable tool they have.
C/C++ can be taken to extremes with preprocessor abuse. If you've never looked at the obfuscated C programming contests, you should. As far as Scala DSLs go, didn't somebody post one the other day that turned an ASCII art picture of a Christmas tree into a valid Scala program? Isn't there also a Scala DSL that lets you make it look like BASIC?
Sure, there are languages that don't let you get at the meta, but just because the ones that let you do can be abused does not invalidate the usefulness of the notion.
However, it's at least partly taking advantage of the fact that normal Haskell actually looks (in shape, anyhow) vaguely like BASIC as-is. At the very least, Haskell doesn't force you to use parentheses or braces everywhere.
This is why Lisp is a HackerLanguage instead of a commercial language: hackers are generally loners who don't care if others can figure out their code (at least while they are in the mode or role of hacking). Thus, they build their own little world in it that fits themselves nicely so that they can hack fast, but the rest of the world be damned.
I think calling this a straw man would be generous. Hackers use lisp because they feel it makes code more easily readable. You don't have to agree with them, but at least take time to understand their arguments before you refute them.
For the most part I enjoyed reading this article, however I'm uncomfortable with one assertion:
"This is why Lisp is a HackerLanguage instead of a commercial language: hackers are generally loners who don't care if others can figure out their code (at least while they are in the mode or role of hacking)."
I don't know many hackers this applies to. Rather, the DRY ethos seems to extend into readability and accessibility -- most acknowledge they'll one day pass the code to someone else to maintain, and too much complexity makes that nearly impossible.
Lisp requires a new way of thinking -- in recursion, lambdas, mapcars, etc. -- to write good code that reflects the awesome abilities of Lisp. Unfortunately many people don't grasp it. They don't want to think, or to learn superior ways, if they can just use a language that lets them solve their problem. The way to the solution doesn't matter much if the solution itself works.
Btw the same phenomenon happened with Ada. The Ada 95 language is awesome. I admired it, it was real fun to use it. But average programmers are simply overwhelmed. That's the reason why Ada died.
Many people also complain about Unix and Linux but if you take the effort and learn it seriously you will love it.
What awesome abilities are you talking about, particularly? Recursion, lambdas, and mapcar are available in just about every modern language, be it VB, Ruby, JavaScript, or PHP.
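For what it's worth, even Java -- the thread's go-to example of a conservative language -- gained lambda and map equivalents with Java 8 streams. A quick sketch (names invented for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapcarEverywhere {
    // The moral equivalent of (mapcar #'1+ '(1 2 3)) using a lambda.
    static List<Integer> increment(List<Integer> xs) {
        return xs.stream().map(x -> x + 1).collect(Collectors.toList());
    }

    // Plain recursion, nothing Lisp-specific about it.
    static long factorial(int n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(increment(Arrays.asList(1, 2, 3))); // [2, 3, 4]
        System.out.println(factorial(5)); // 120
    }
}
```

Which rather supports the point: these particular features no longer distinguish Lisp.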
Lisp is so different from all other languages that you have to use it to understand it. Just reading about it is not enough.
Note that there are many variants of Lisp. Older Lisp versions are suitable for experts only. For beginners I would recommend Scheme, which is a well-defined successor of Lisp. I would recommend Racket (http://racket-lang.org/) as an SDK. It is suitable for beginners as well as for professional development.
Scheme is a successor to Lisp the same way motorcycles are a successor to trucks. Despite the syntactic and historic link, they are pretty much different languages (as is Clojure). Scheme is a decent intro to Lisp-like languages, and Racket is an awesome environment, both for teaching and probably for actual work (I haven't done any in it, so I'm only speculating about this). But one piece of advice to those who choose to start with it: keep in mind that Scheme is only one way to look at what Lisp is (and IMHO not the most enlightening or useful one), and it's important to know what assumptions its creators made about what programming should be like.
As I pointed out in a comment at the beginning of the thread, Scheme teaches you some habits that don't translate well to other Lisps. So to those thinking of picking it up: be mindful of the assumptions of the language, and when you decide to look into other Lisps, don't assume they will hold there as well.
In fact I would actually recommend learning Clojure or Common Lisp before Scheme. I consider both of them to be better languages, but I have my own set of assumptions that might not be shared by others :).
I've had previous dalliances with Lisp, but it never really struck me as a big enough win to pursue. So it's interesting to see a decent list of distinctive features -- I've gone over it to see which features don't exist in my usual work language, C#. The things off this list that I don't have, couldn't code, and actually want are macros, conditions and restarts, and generic functions. I don't understand the MOP well enough to comment on it, either. I'm pretty sure the rest is either available as libraries, just isn't important enough, or could be coded fairly quickly -- although I'm not sure it'd be wise to try to write Common Lisp in C#, Greenspun's tenth rule and all. ;)
I do note the PG quote at the bottom of the page -- "the power of Lisp cannot be traced to any single one of them. It is the combination which makes Lisp programming what it is" -- and I can see that unity being very elegant.
But the real standout item is macros. I suppose it's the only unstealable feature. Or rather, attempts to steal it, like .NET DLR Expression Trees, make really ugly code and are never going to be standard programming techniques.
The notion of programming languages being “too powerful” rests on fallacious assumptions.
The faulty reasoning is this: If Hacker Hortense does something with powerful language L that Newbie Nathan finds confusing, we assume that language J won’t permit Hortense to do that, and therefore if we standardize on language J, fewer bad things will happen.
This reasoning is faulty. First off, if Hortense and Nathan don’t see eye to eye on how to write programs, no language will solve the problem, because you have a disparity of experience and/or education. If what you want is code Nathan will like, and you don’t want to raise Nathan to Hortense’s level, you have to drag Hortense down to Nathan’s level with coding standards. And you could have done that in Language L just as easily as J. If you ban L, what will you do when Hortense implements Parser Combinators and Monads in J?
The first flaw is the assumption that language features introduce complexity, when in reality it isn’t the features, it’s the solution to the problem that confuses.
This indirectly leads into the second fault with the reasoning. There is a hidden assumption that Hortense is going to write only so much code, that there will be only so many “problems” Nathan will have with her code, and that every problem we eliminate is one fewer problem in the result. This is not how software works.
Let’s say Hortense writes some code in L, and Nathan complains that there are seventeen things he doesn’t understand. Ten of them rely on features of L; seven are independent of the language. If we tell Hortense to rewrite things in J, we cannot assume that the result will have only seven things Nathan doesn’t understand. It might have even more, as Hortense works around J’s deficiencies.
For a demonstration of this, look at any modern Java dependency injection framework. These complex and opaque software machines are made up of XML, interfaces, and classes. They exist in no small part because Java lacks the meta-programming of languages like Lisp or even Ruby. Are these things simple by virtue of being written in Java instead of Lisp?
Imagine for a moment that DI frameworks don’t exist, and that Hortense had built something in Lisp with macros for dependency injection. Nathan is nonplussed, so Hortense rewrites it in Java and rolls her own DI infrastructure. Will the result really have fewer things that Nathan doesn’t understand? Or will it be even more inscrutable thanks to the accidental complexity of working around Java's limits?
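To make “rolls her own DI infrastructure” concrete, here is a deliberately tiny sketch of a constructor-injection container in Java. Every class and method name is invented for illustration; a real framework adds scoping, configuration, and error handling:

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

// A minimal DI container: bind interfaces to implementations, then
// resolve a type by recursively instantiating its constructor arguments.
public class TinyInjector {
    private final Map<Class<?>, Class<?>> bindings = new HashMap<>();

    // Bind an interface to a concrete implementation.
    public <T> void bind(Class<T> iface, Class<? extends T> impl) {
        bindings.put(iface, impl);
    }

    // Resolve a type: take its first declared constructor and
    // recursively instantiate each parameter.
    @SuppressWarnings("unchecked")
    public <T> T get(Class<T> type) {
        Class<?> impl = bindings.getOrDefault(type, type);
        try {
            Constructor<?> ctor = impl.getDeclaredConstructors()[0];
            ctor.setAccessible(true);
            Class<?>[] params = ctor.getParameterTypes();
            Object[] args = new Object[params.length];
            for (int i = 0; i < params.length; i++) {
                args[i] = get(params[i]);
            }
            return (T) ctor.newInstance(args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Cannot instantiate " + impl, e);
        }
    }

    public static void main(String[] args) {
        TinyInjector injector = new TinyInjector();
        injector.bind(Greeter.class, ConsoleGreeter.class);
        App app = injector.get(App.class);
        System.out.println(app.greeter.greet()); // hello
    }
}

interface Greeter { String greet(); }
class ConsoleGreeter implements Greeter { public String greet() { return "hello"; } }
class App {
    final Greeter greeter;
    App(Greeter greeter) { this.greeter = greeter; }
}
```

Note how even this toy leans entirely on reflection: the wiring happens at runtime, and nothing in it can be followed line by line without knowing the container's conventions -- which is rather the point being argued.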
The “problem” is not that some languages are too powerful; it is that people imagine that teams of programmers can work together on complex problems without communicating, and some people assume that when there is a disparity of experience or knowledge, people can work together without educating or learning from each other.
The more interesting situation to me isn't novice vs expert. It's the same one that bothers the person who wrote the top of the original article: pro vs pro.
A former boss of mine said that programmers are like hairdressers: whenever she changes to a new stylist, the first thing they say is, "Oh my god, what did that last person do?" That's certainly what I say when I have to examine an existing codebase. More expressive languages allow for more stylistic variation, and one's style is deeply rooted in one's values and experiences. One person's brilliance is another person's WTF.
I think the right solution to this involves developing a house style, which for my team involves zero code ownership and a lot of collaboration. Eventually the team converges on a set of idioms that work for everybody, so style issues are minimal.
But that's not always the working reality. If I have to inherit a bunch of code from a random developer, I'd much rather it be in Java than, say, Ruby. Java definitely limits brilliance, but it also limits too-clever-by-half idiocy. The downside risk with a Java code base is smaller.
I agree about developing a house style. But it is not my experience that the Java language limits the too-clever-by-half idiocy.
If you want to say that the kinds of companies that hire Java developers don’t hire clever idiots, that’s a different discussion. But I’ve seen so many convoluted architectures built out of Java that it feels to me like too-clever-by-half is a constant across all languages. The very best thing, for me, is discouraging cleverness; the second best is letting people do their clever things in a language that helps them, rather than making them write even more inscrutable code just to get away with being clever in Java.
This is my point above, though perhaps it doesn’t match your experience: some clever things will be done, and Java will not prevent them from being done, but merely prevent them from being done simply and cleanly. So why not have this other “pro” do what she is going to do, but do it simply and cleanly thanks to a powerful language, rather than resorting to XML and other shenanigans?
But honestly, I fear we are now having a weird discussion where our working assumption is inheriting code from people we don’t respect. It’s far, far easier to simply work with reasonable, sane people, than to make all your choices predicated on the idea that your colleagues are psychopaths.
Hmmm. I think I still maintain my preference for bad Java over bad Ruby. The AbstractSingletonProxyFactoryBeanImpl insanity is painful, but I feel like most Java problems can be solved with pruning shears. Ruby's metaprogramming, on the other hand, puts COMEFROM-style functionality into the hands of noobs, and its duck typing makes it very hard to answer basic questions about what the hell X is. In Java, I can always tug on something and figure out what it's connected to. But perhaps that's just a matter of my personal debugging style.
Broadly, though, I think you're right: it's irrelevant to the real problem. Java is the new COBOL because most large software development organizations are functionally insane. They can't tell good code from bad, and rig things in a way that bad code is almost guaranteed anyhow. So naturally they'd prefer a language in which bad code is merely painful, not fatal. Because that lets them cover up their failings for much longer periods.
My personal solution is yours: stop working with other people's shitty code. I'd like to see the problem solved across the industry, but focusing on language choice doesn't seem like it's getting us anywhere.
I'm not so sure I agree with you, because there is something to be said against your counterexample. In java, I can reason as follows: the Java Dependency Injection Framework is gigantic and convoluted, therefore it is going to be complex and difficult to manage.
In Java there is a sort of ceiling, one might call it an upper bound on the mindfuckery per square inch of code. In 100 dense lines of lisp, there is no a priori reason to assume that no one has written a recursive descent macro-expanding s-expression handler for arbitrary routing of network packets in a rules-based DSL. In the same amount of dense java code there are rarely more than 200 method calls, all statically checked, and rarely more than 100 variables, typed and scoped.
From John Carmack's recent article on static checking: "If you have a large enough codebase, any class of error that is syntactically legal probably exists there." Now, he is concerned with actual defects, but the same rules apply even more strongly for stylistic rules. As the size of a codebase increases, the probability that any particular language feature is absent approaches zero. And in a language like lisp, where you can write your own language features, this is moderately horrifying.
The fact that you can replicate java's dependency injection in a few orders of magnitude less code in lisp is not a comfort to me. Because the 10,000 lines of code in Java's Dependency Injection Framework is a red flag to me. The chances that someone who writes the same thing in lisp has drastically simplified the implementation are not so high.
the 10,000 lines of code in Java's Dependency Injection Framework is a red flag to me.
We agree.
The chances that someone who writes the same thing in lisp has drastically simplified the implementation are not so high.
We disagree.
First, when I roll my own, I scratch only my own itch. I don’t need to build something that works for everyone, everywhere. It’s like Microsoft Word: MSFT boasts that most users only need 5% of what it does, but every niche of users uses a different 5% of the whole thing.
But can I roll my own? Well, I suggest that the answer is more likely to be “yes” in Lisp than in Java. First, folklore suggests that defects are constant per line of code. Therefore, if I need fewer lines of Lisp than of Java, I should have fewer defects to contend with. I assume three things. First, I need much less than the full framework’s functionality. Second, Lisp is more expressive than Java, so I need fewer lines of Lisp for any functionality than of Java. Third, I suggest that languages with meta-programming support are particularly well suited for tasks like dependency injection, reducing the amount of code I need to write even further.
Now, the big DI framework is written by someone else. So is it free? No. I need time to learn it, time to use it, I can make mistakes using it, I can get an XML configuration wrong, I can implement an interface when I am supposed to extend an abstract class, I am not immune from defects just because I am using a library.
So the net question for me is whether the chances of successfully rolling my own feature in Lisp, for my project’s specific needs, are greater than the chances of successfully using an existing framework in the Java world. And parallel to that, the question is whether someone else working with my code will find it easier to decipher the XML configuration, interfaces, and classes I have written to work with a Java DI framework, or to work with the smaller, simpler, and more compact Lisp code written for this specific project.
Reasonable people can go either way on this, but I find it hard to believe that Java is “obviously” a win, especially for those who have used one of these big frameworks with their many gotchas (as I have).
p.s. Of course, the wild card is that there are plenty of libraries in Lisp as well. I reject the notion that every Lisp programmer reinvents everything from scratch: https://github.com/lprefontaine/Boing
Sorry, I didn't see that this actually posted. What I intended to say is that a less verbose expression is not necessarily easier to understand. The article is really about the balance between elegance (or performance) and accessibility. An expert coder working with a powerful language like Lisp can implement a lot of functionality very quickly, but there is a point where less skilled programmers working with common understanding of a less flexible language can implement more functionality more quickly, simply because there are more of them working in parallel.
Strictly speaking yes, it's a hypothesis. But the fact that the programming ecosystem looks as it does constitutes some evidence in favor of that hypothesis. Were it otherwise, you'd expect the professional programming world to be economically dominated by Lisp and a handful of super-programmers. Yet that isn't what we see. Why not?
That's the stock objection. Here's my answer: historically speaking, we've barely started. Software is the first mass endeavor of its kind that humans have tried. It belongs to a post-industrial era that can be expected to take a long time to work itself out. Under such conditions, social proof doesn't work. Whatever the rational way of making software turns out to be, statistically speaking it hasn't been tried yet.
Will it turn out to be "Lisp and a handful of super-programmers"? I don't know. What we need is an age of experimentation. The great thing is that startup costs are now so low that we are beginning to see that happen. Emphasis on beginning.
That argument seems a little too convenient; we are after all talking about a field (and a language, Lisp) that's been around for over 50 years. I could certainly see pockets of inefficiency persisting after such a time, but I would hardly expect the exception to be the rule at this point.
Keep in mind that I'm only suggesting that a crossover point exists, I don't pretend to know where exactly it is. In order for me to be wrong, a single superior programmer would always have to be better than two slightly inferior programmers working with a slightly less expressive language. I strongly doubt that this is true. The simplest explanation for what we observe is that in fact a team of inferior programmers working in parallel can be more efficient than a single superior programmer working alone. Not always, but often enough to prevent more expressive but less comprehensible languages from becoming dominant. What constitutes "expressive" and "comprehensible" will evolve over time, as you suggest (maybe Lisp will someday become tomorrow's Java!), but the underlying scaling law will remain.
This is a fascinating conversation. I've always had trouble working in teams, so I'd like to believe that superior programmers will out in the end. Or at least that they will in a few problem domains.
But I wonder if this is wishful thinking, if this isn't just another case of the prisoner's dilemma. Perhaps like how cities with mostly poor people would collaborate many times in history to conquer neighboring barbarians, even though the barbarians had more freedom and were thus richer. (See http://en.wikipedia.org/wiki/Fates_of_Nations.)
Then again, there's reason for hope. Perhaps the parallelizable sort of programming is more menial. It certainly seems that way with the way communication costs overtake large teams. It's almost like Vernor Vinge's zones of thought (http://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep, http://www.youtube.com/watch?v=xcPcpF2M27c) - as your team grows bigger you can just watch the members grow dumber in front of your eyes as more and more of their cognitive effort is eaten up by internal communication, leaving less and less for externally-useful work. If this is true, there's hope that advances in programming will automate the low-cognition tasks and allow programmers to focus on the high-cognition ones, leveling the playing field for small, high-cohesion teams.
---
Me, I've been obsessed with something raganwald said when he spawned this tendril of conversation: exercising explicit control over the space of inputs my program cares about. My current hypothesis: eliminate fixed interfaces, version numbers, and notions of backwards compatibility. All these are like petri dishes of sugar syrup for code to breed more code. Replace them with unit tests. Lots of them[1]. If I rely on some code you wrote, and I want to pull in some of your recent changes, I need to rerun my tests to ensure you didn't change an interface. Programming this way is less reassuring, but I think it empowers programmers where abstraction boundaries impose mental blocks. Great programmers take charge of their entire stack, so let's do more of that. I'm hoping this is the way to prove small teams can outdo large ones.
[1] Including tests for performance, throughput, availability. This is the hard part. But I spent a lot of time building microprocessor simulators in grad school. I think it's doable.
I'd like to believe that superior [solo] programmers will out in the end
I think you're wrong (sorry!) because it's impossible to talk about superior programmers without talking about teams. Building complex systems is a team sport. There's no way around this. But you can't have good teams without good programmers.
The phrase "scaling complexity" has at least two axes built into it: the abstraction axis -- how to get better at telling the program to the computer -- and the collaboration axis -- how to get better at telling the program to each other. Most of this thread has been about whether we suck at the former. But I say we really suck at the latter, and the reason is that we haven't fully assimilated what software is yet. Software doesn't live in the code, it lives in the minds of the people who make it. The code is just a (lossy) written representation.
We can argue about how much more productive the best individual working solo with the best tool can be -- but there's no way that model will scale arbitrarily, no matter how good the individual/tool pairing. At some point the single machine (the solo genius) hits a wall and you have to go to distributed systems (teams). One thing we know from programming is that when you shift to distributed systems, everything changes. I think that's true on the human level as well. (Just to be redundant, by "distribution" here I don't mean distributed teams, I mean knowledge of the program being distributed across multiple brains.)
Maybe you wouldn't have trouble working in teams if we'd actually figured out how to make great teams. So far, it's entirely hit and miss. But I think anyone who's had the good fortune to experience the spontaneous occurrence of a great team knows what a beautiful and powerful thing it is. Most of us who've had that experience go through the rest of our careers craving it again. Indeed, it has converted many a solo type into an ardent collaborator. Like me.
I was originally going to write about this and then decided not to go there, but you forced my hand. :) Just as long as it's clear that when I say "team" I mean nothing like how software organizations are formally built nowadays. It's not about being in an org chart. It's about being in a band.
The phrase "scaling complexity" has at least two axes built into it: the abstraction axis -- how to get better at telling the program to the computer -- and the collaboration axis -- how to get better at telling the program to each other. Most of this thread has been about whether we suck at the former. But I say we really suck at the latter, and the reason is that we haven't fully assimilated what software is yet. Software doesn't live in the code, it lives in the minds of the people who make it. The code is just a (lossy) written representation.
Ah, you're right. I was conflating the two axes.
I'd like to be part of a 'band'. I've had few opportunities, but I've caught the occasional glimpse of how good things can be.
Since that whole aspect is outside my ken, I focus on expression. Hopefully there's no destructive interference. I would argue that what you call abstraction is about 'communication with each other' more than anything (even though it breaks the awesome symmetry of your paragraph above :)
To me the big question is how we're going to scale up complexity. The million-line programs we have today are already an absurdity. What are we going to do, have billion-line programs? Anyone who can figure out to provide equal value with 100x less code (edit: that grows, say, logarithmically in size rather than superlinearly) is going to have an edge. Brute force won't work forever. Plus it gets extremely expensive as one approaches the limits.
Which brings us back full circle to the Mythical Man Month. While I've argued here for a crossover point where more programmers = more productivity, I acknowledge that there is a similar crossover point going the other way, where more programmers = less productivity. Finding the sweet spot in-between is the art of organization, and not yet a science.
Have you read The Mythical Man-Month? Adding more, less skilled programmers does not make things go more quickly. That idea is so wrong on so many levels that I don't know where to begin.
MMM said that adding more programmers to an already late project doesn't make it finish faster. It didn't say that having more programmers who already understand the project will make the project finish late. If that were true, then one programmer would always be the optimal number to complete any project, which is clearly wrong.
Also, "less skilled programmers" are not always "incompetent programmers", so more of the former who already understand the system may be able to complete the project faster than fewer "more skilled programmers".
Yes I have. Do you remember why he said more programmers is a problem? Because it increases the amount of communication. Because of this, a few higher-skilled programmers get more done than many less skilled programmers, largely because less communication is required.
I've witnessed this directly many times over the years.
MMM does indeed say that too many programmers make a project late, regardless of how far behind that project already is. Nine women can't have a baby in a month, and all that. Have you really read it? The book makes this quite clear. The Wikipedia page doesn't.
Perhaps I'm missing something here. Argument 1 seems to basically be that Java code is bigger than Lisp code. Argument 2 is that big code bases tend to use all of a language's features.
Why couldn't you solve the second problem with the first? As you've already pointed out, Lisp code is smaller. Thus, you don't have to worry about the "large code base" problem like you would with java.
A couple lines of particularly dense Perl can make life pretty terrible, e.g. compared to many thousands of lines of getter-setter Java. You can pack more language features into a smaller area with a more expressive language. It follows that a smaller area might hold more complexity and likely more opportunities for errors, and therefore bugs.
At the very least, we can say "the LOC-bug relationship doesn't necessarily hold across languages". It's a useful rule-of-thumb, not a universal law.
That is only the case with actual complexity required by the problem, not incidental complexity caused by the limitations of the language (such as is common with Java).
Isn't his point rather that Lisp goes beyond the normal definition of 'programming language'?
"Lisp is almost an interpreter builder rather than just an interpreter/compiler"
So most languages have a fixed grammar, and programs are always written in that grammar. Java functions, whatever problems they have, are at least always declared using a fixed grammar -- something like
function ::= returnType Identifier parameterList;
So even if you don't know what a function does, at least you (and tools) can easily see functions in the source code. The same applies to loops, classes, conditionals, etc. (Same for python, BASIC, whatever.) What Lisp does goes beyond that; I can define a new conditional language construct, or looping grammar, or object system...
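That fixed grammar is exactly what lets tools work mechanically. As a small illustration, Java's reflection API can enumerate every method of a class precisely because declarations always take the same shape (the class chosen here is arbitrary):

```java
import java.lang.reflect.Method;

public class EnumerateMethods {
    public static void main(String[] args) {
        // Every method of String, found mechanically -- no need to
        // understand what any of them actually do.
        for (Method m : String.class.getDeclaredMethods()) {
            System.out.println(m.getReturnType().getSimpleName() + " " + m.getName());
        }
    }
}
```

A Lisp that lets you define new defining forms gives tools no such guarantee unless they can expand the macros first.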
"This means that one can make it be just about anything they want. Thus, its culture is almost a lack of culture."
The fear is that you'll end up having to join a project using "Dave's Object System" or "Nora's Conditionals Library". And that's where Lisp has the Aw(esome|ful) power to write in wildly divergent styles -- in fact, written in whole new grammars.
I suppose there's an idea that code can be written in a 'natural' way for the language. So you have this idea in Python programming of something being 'pythonic' -- not just conforming to the Python grammar, but also written in a particular, respected programming idiom. So the question is, when you have the power to write your own grammar, can you still write in a good, predictable style that makes the code a good form of communication for other programmers? Is there a 'lispish' way to write code? (I suppose the same question applies to things like internal Ruby DSLs.)
Honest questions; although I've given Lisp a good shot, I've never written anything longer than a few hundred lines...
Yes. If you read enough Clojure, Scheme, and Common Lisp code you'll see that they each have their own styles. On a few occasions I read Common Lisp code written by Schemers; it was somewhat ugly and inelegant, although the same code would have been elegant if written in Scheme. Here's an example from an Erik Naggum article (http://www.xach.com/naggum/articles/3195982690560792@naggum.... ). First, the Scheme style in Common Lisp:
Well, it’s true that Lisp has meta-syntactic programming and Java doesn’t, however since Java doesn’t have homoiconicity, you end up with people building things where a lot of the meta-programming is baked into XML files.
So, you could argue that a large, complex Java program can’t be understood just by reading it a line at a time, you have to figure out how it is all wired together via the XML files.
I think what I’m trying to say is that large and complex systems are, well, large and complex, whether in Lisp or Java. And Lisp has standard facilities just like Java, e.g. CLOS. So it’s true Dave could write his own object system, and Nora her own conditionals, but I wonder whether that is more folklore than practical reality on commercial projects.
There's some definite trolling down at the bottom half of that page: "Huh? I've seen no objective evidence that BrainFsck is more challenging for business applications and systems software programming than is Lisp. I invite you to provide clear evidence that it is more challenging."
Leaving that aside though, other people smarter and more experienced with Lisp than I am have suggested that this "problem" is not unique to Lisp, and may not be much of a real problem at all. I really think it's just a question of having good documentation and a sensible style that explains what you need to know to use the magic, even if it doesn't explain what's underneath. It's true that a lot of "lone hackers" aren't going to be writing good (or any!) documentation, but that's not a Lisp problem.
Ruby's metamagic is a common example. I already know Ruby and recently I've been learning Rails. I know there's a lot of magic going on behind the scenes, and occasionally I'll read some example or something and know that underneath there's magic that I do not entirely understand. I can continue studying the framework as a user and probably even get away using it for a while without completely understanding the magic underneath. I doubt that every person who has used Rails commercially completely understood every part of the framework that they use, since that would be a huge drag on their ability to ship their thing.
Another interesting example: This past week I talked to someone at a nearly pure Scala company and he described this DSL that they wrote (or just use? I can't remember) to interface with SQL databases. It exploited the fact that infix operators are really just methods and that you can define implicit conversions on existing library types that allow you to sort of extend them (I don't actually know Scala, so this may be slightly off base). The snippet of code he showed me, while it looked like Scala, would be turned into an SQL clause. Programmers who use this don't necessarily need to understand all of the gritty details so much as they need to understand what the designer intends and understand the semantics of SQL and of the DSL.
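Not knowing the actual Scala code, here's a rough Python analogue of the trick being described: operator overloading lets an ordinary-looking comparison build a SQL fragment instead of evaluating to a boolean (the `Column` class and its string rendering are invented for this sketch):

```python
# Hypothetical sketch of an embedded SQL DSL via operator overloading.
# A comparison on a Column builds SQL text instead of returning a boolean,
# roughly the effect the Scala implicit-conversion trick achieves.
class Column:
    def __init__(self, name):
        self.name = name

    def __gt__(self, value):
        return f"{self.name} > {value!r}"

    def __eq__(self, value):
        # Overloading == means Column objects no longer compare normally;
        # a real DSL would likely return a richer expression object instead.
        return f"{self.name} = {value!r}"

age = Column("age")
clause = age > 21   # builds the string "age > 21"; no query runs
```

Users of such a DSL write what looks like host-language code, but what they actually need to understand is the SQL it denotes and the designer's intent, just as described above.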
In Common Lisp, "lambda" is a macro. I don't really know how it works, but I know that it expands into lower-level code defining a function and a closure. It's the same way with standard macros (in some Lisps) like let, cond, and, or, etc. (in Common Lisp, if is actually a special operator rather than a macro, and letrec is Scheme's). Most Lisp programmers who are not experts can use them knowing that they are macros but not knowing their complete implementation, because they are very well defined (and because they are pretty simple).
More complicated macros like "with-open-file" can be used by programmers who probably have some idea of how the macro works, but not a complete understanding. As long as your own macros and Lispy magic are documented sufficiently and designed sensibly, programmers should be able to use code in your "world" without understanding it completely. At least to start.
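The same contract shows up in Python's context managers: with-open-file expands into open/close cleanup code that its users rarely read, much as callers of a context manager never see its try/finally plumbing. A minimal sketch of that pattern, with an invented helper name:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def with_open_file(path, mode="r"):
    # Hide the setup/teardown plumbing, as the Lisp macro's expansion does:
    # the file is opened on entry and closed on exit, even on error.
    f = open(path, mode)
    try:
        yield f
    finally:
        f.close()

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with with_open_file(path, "w") as f:
    f.write("hello")
assert f.closed   # closed without the caller ever calling close()
```

A user of this helper needs only its contract (file open inside the block, reliably closed afterwards), not its internals, which is exactly the point about well-documented magic.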
N.B. I haven't used Lisp in a commercial setting, so it's entirely possible all this is bunk and I'm just being naive. Wouldn't be the first time.
edit Upon re-reading this monster comment I realize it might come across that I'm advocating some kind of "cargo cult" or copy-and-paste style of ignorant programming. I'm really not. My point is that while understanding your tools is important, so is knowing when you don't need to peel back the curtain and instead need to address the problem at hand.
I think they are poking fun at the often-misused technicality that "all Turing-complete languages are equivalent", which is obviously beside the point.
I like how most languages nowadays are trending back toward Lisp-like functional languages by supporting constructs like closures, lambda expressions, etc. It's all about modularity and making the programmer most productive, which Lisp seems to do quite well. I guess John McCarthy was on to something.
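The closures those languages adopted boil down to functions capturing their defining environment, which a few lines show (a toy example, not tied to any particular language's history):

```python
# A closure: increment captures (and mutates) the enclosing count binding,
# the construct Lisp pioneered and mainstream languages have since adopted.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

c1 = make_counter()
c2 = make_counter()
c1(); c1()
# c1 has counted to 2, while c2, with its own captured state, is untouched
```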
Forth can be a pretty good way to go about implementing a Lisp, actually. I have a book[1] that uses Forth to build a set of list-manipulation facilities inspired by Lisp and then in turn uses those to construct a Prolog-like DSL for expressing expert systems. I've actually been tinkering with a pair system in my own Forth dialect that could be interesting if you've ever wondered what recursive list operations would look like in a concatenative language.[2]
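For anyone curious what such Lisp-inspired list facilities amount to, here is the core idea sketched in Python rather than Forth (cons/car/cdr are the classic primitives; the map function's name is invented for the sketch):

```python
# Lisp-style pairs and a recursive list operation.
nil = None

def cons(head, tail):
    return (head, tail)

def car(pair):
    return pair[0]

def cdr(pair):
    return pair[1]

def lisp_map(fn, lst):
    # Walk the chain of pairs recursively, rebuilding it with fn applied.
    if lst is nil:
        return nil
    return cons(fn(car(lst)), lisp_map(fn, cdr(lst)))

nums = cons(1, cons(2, cons(3, nil)))
doubled = lisp_map(lambda x: x * 2, nums)   # => (2, (4, (6, None)))
```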
The paper describes a completely false dichotomy between powerful and simple. In the context of an extremely simple and powerful language like Scheme or other Lisps, the argument is almost meaningless.