You already use Lisp syntax (blog.danieljanus.pl)
110 points by nathell on May 21, 2014 | 109 comments



Sure, you can write bash like that. Actually, if you shift the parentheses a bit, you can write almost any language like that:

    (add 4 (mul 2 5))  vs.
    add(4, mul(2, 5))
But that doesn't mean you should write bash like that. In fact, it is (IMO) very bad style to nest $(). When people complain about Lisp syntax, it's only partially about the prefix notation or the parens; it's about the incredible amount of nesting.

Funnily enough, of all things, bash offers an alternative paradigm. Instead of nesting, you use filters:

    # pseudocode
    echo 2 | mul 5 | add 4
For example, you can write:

    nerrors=$(mycommand | grep error | sort | uniq | wc -l)
instead of:

    # pseudocode
    nerrors=wc("-l", uniq(sort(grep("error", mycommand))))
I find this paradigm of "do A, then do B, then do C" much easier to think about than the nested "do C on the result of doing B on A". I wish more programming languages would embrace that kind of coding, which is basically functional, but in imperative or data-flow order. You have aspects of that in C# (LINQ), SQL, and some others. I know you can achieve something like that in Haskell through Monads and `do`, though it feels a bit bolted on as a concession to having to do imperative stuff in a purely functional language. (And as nice as Haskell is, it will probably never have the user base that bash has...)
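(To make the data-flow style concrete, here is a quick Python sketch; the `Pipe` wrapper is invented purely for illustration, not a real library:)

```python
class Pipe:
    """Invented wrapper: value | function applies the function to the value."""
    def __init__(self, value):
        self.value = value

    def __or__(self, fn):
        # Each | step feeds the current value into the next function.
        return Pipe(fn(self.value))

mul5 = lambda x: x * 5
add4 = lambda x: x + 4

# "echo 2 | mul 5 | add 4", read left to right:
result = (Pipe(2) | mul5 | add4).value
print(result)  # 14
```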


"I know you can achieve something like that in Haskell through Monads and `do`"

The closer equivalent to UNIX pipelining in Haskell is actually the simple "." operator, function composition. That pipeline would generally look more like

    countLines = wc LineCount

    countLines . uniq . sort . grep "error" $ mycommand
It's not a direct translation, because that is neither exactly legal Haskell (I'm munging program vs. some sort of shell here) nor exactly how you'd really do it in Haskell; however, pipelines of several functions composed together are idiomatic and frequently used.

"though it feels a bit bolted on as a concession to having to do imperative stuff in a purely functional language"

It really isn't. IO is only one particular monadic thing; check out Parsec for some more fun. It only looks bolted on if you look very superficially, and you darned near have to bring the preconception in the first place to even briefly come to that conclusion. That idea basically comes from the misconception that "Haskell can't do side effects", then "Oh, well, this way of doing side effects must be a bad idea or something"... the first statement is false. Honestly, I miss the way Haskell does IO in other languages far more than I miss the way other languages do IO in Haskell. (To be clear, I do have both feelings, but the former is way more common.)


For what it's worth, I often do something like this:

    (!) = flip ($)
Which then allows me to do things like

    mycommand ! grep "error" ! sort ! uniq ! wc lineCount
    
Which is nice because it visually resembles `|` (which is a reserved symbol in Haskell).

Similarly for fmap:

    (<!>) = flip (<$>)
    result <- get <!> table <!> lookup "foo"
    print result ! liftIO
Or for function composition:

    (~>) = flip (.)
    doStuff = reverse ~> head ~> negate ~> print
    doStuff [1,2,3]
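(That flipped composition can be sketched in Python too; `compose_lr` is an invented helper, just to show the left-to-right reading:)

```python
from functools import reduce

def compose_lr(*fns):
    """Left-to-right composition: compose_lr(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

# Mirrors: doStuff = reverse ~> head ~> negate
do_stuff = compose_lr(
    lambda xs: list(reversed(xs)),  # reverse
    lambda xs: xs[0],               # head
    lambda x: -x,                   # negate
)
print(do_stuff([1, 2, 3]))  # -3
```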
That being said, Lisp does seem far more suited to shell programming than Haskell, because functions are typically variadic. What would probably be nice (and probably has been done) is a Lisp with syntax extended to include a few symbols for bash-style commands like |, >, etc:

    $ def nth-line (n) (read a | head -(n) | tail -n 1)
    $ ssh -i .ssh/some.pem (python show_servers.py | grep 'cool server' | cut -d ' ' -f 3 | nth-line 1)


"It only looks bolted on if you look very superficially, and you darned near have to bring the preconception in the first place to even briefly come to that conclusion."

...or read any of the older Haskell books and tutorials, which all seemed to start in pure expression-land and attempt to stay there as long as possible (which is good programming style, don't get me wrong; but learning is different). IO and other monadic things are treated like huge and scary monsters that must be left until the very last possible moment, if treated at all, because they're HUGE AND SCARY!!!!1! Aaaaahhhhhh!


"Honestly, I miss the way Haskell does IO in other languages far more than I miss the way other languages do IO in Haskell. (To be clear, I do have both feelings, but the former is way more common.)"

Well said - this is very much my experience as well.


Indeed, having a chain is often nicer than a lot of nested stuff. And when your base language is an abstract syntax tree, it becomes pretty easy to write a macro that lets you flatten a lot of things out (among a bunch of other neat things).

If your complaint with a Lisp is that there's too much nesting, you haven't seen enough Lisp. Pretty much every Lisp has an infix-math library somewhere; for the popular Lisps there are probably at least a dozen, and they usually allow mixing prefix math back in, because nuts to typing "a + b + c + d + e" when I can just type "(+ a b c d e)", or when I want to call functions somewhere in the middle.

There's no reason you couldn't write your example using a threading macro in Clojure, assuming your functions are defined to take their input explicitly as the last argument instead of reading a common stdin. (If they take it as the first argument, you use a different macro but the same syntax.) Your code just becomes

  (->> (command) (grep "error") sort uniq (wc :-l))
Does anyone think (wc :-l (uniq (sort (grep "error" (command))))) is a better option? And instead of typing that out manually, I just used the REPL.

    user=> (use 'clojure.walk)
    nil
    user=> (macroexpand-all '(->> (command) (grep "error") sort uniq (wc :-l)))
    (wc :-l (uniq (sort (grep "error" (command)))))
Even without macros you can still set up nice-looking chaining with generators and coroutines. "Too much nesting" seems like the wrong complaint.
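(To make "chaining with generators" concrete, here's a Python sketch of the nerrors pipeline from upthread; all the helper names are invented:)

```python
def grep(pattern, lines):
    """Keep only lines containing the pattern, like grep(1)."""
    return (l for l in lines if pattern in l)

def sort(lines):
    """Sort the whole stream, like sort(1)."""
    return iter(sorted(lines))

def uniq(lines):
    """Collapse *adjacent* duplicates, like uniq(1)."""
    prev = object()  # sentinel that never equals a real line
    for l in lines:
        if l != prev:
            yield l
        prev = l

output = ["ok", "error: disk", "error: net", "error: disk", "ok"]

# nerrors=$(mycommand | grep error | sort | uniq | wc -l), step by step:
lines = grep("error", output)
lines = sort(lines)
lines = uniq(lines)
nerrors = sum(1 for _ in lines)  # the "wc -l" step
print(nerrors)  # 2
```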


Take a look at factor: http://factorcode.org/ Your example would be written as:

    "C:/tmp/adinst.log" utf16 file-lines [ "SEARCH" swap start ] filter members length
File is opened into a sequence of lines, then the sequence is filtered so that only lines containing "SEARCH" remain, then members removes duplicates, then length gets the number of elements left in the sequence. In the shell, "|" is the function composition operator; in Haskell it's ".". In Factor, any whitespace is function composition!


It's way easier than that in Haskell, you just need to do:

  x -: f = f x
and boom, a Unix style pipe.

(Courtesy of Learn You a Haskell, http://learnyouahaskell.com/a-fistful-of-monads, but using no actual monads).


You might really enjoy Elixir[0], a language that runs on the Erlang VM. They provide (what I find to be) a nice feature called the pipe, which looks like this: |> and works basically like a bash pipe (use the output of A as the input of B).

I've played around with Elixir a bit and while I don't quite have a nail to hit with that particular hammer, it has been really fun and mind-expanding (my first functional language!).

[0] http://elixir-lang.org/


You mentioned C#/LINQ, but F# has a very nice-looking pipeline operator, which does exactly that. Here is a blog post explaining it: http://lorgonblog.wordpress.com/2008/03/30/pipelining-in-f/


For me there were one and a half tricks to get Lisp syntax.

The first was the mental conversion between f(a,b,c) and (f a b c), which is just moving the opening parenthesis one token to the left and seeing both commas and whitespace as "something between the tokens that really matter".

The remaining half was starting with Clojure. It is a Lisp whose constructs are really optimized towards dumping extraneous parentheses so it is easier not to get lost in the nesting levels.


To me, right now, I think the core difference between Clojure and other Lisps is that Clojure gets rid of memory management. Yes, other Lisps have automatic deallocation, but the fundamental constructor, `cons`, is a raw memory allocation. It's `malloc` for linked rather than sequential memory.

Although Clojure still has a `cons` it has been replaced by `seq` as the fundamental abstraction. Clojure takes advantage of being fully managed code and in a sense is as much a Lisp as C# is a C [if we were inclined to think of C as a genre, else Algol?].

It seems this, combined with formally incorporating a DSL of the kind an experienced Lisper (such as Hickey) might create for writing programs, gives Clojure its syntax: e.g. [x y] is syntactically like a quoted list, and writing a macro is the sort of thing hubris drives lazy, impatient Lispers to do. In special forms [x y] is like a different, more complex macro, but again the sort of thing people would write for themselves. In other Lisps, the ad hoc nature of such macros makes other people's code notoriously hard to follow.


`cons`, is a raw memory allocation. It's `malloc` for linked rather than sequential memory

Yeah, cons is like malloc, except that the object it creates is automatically memory managed / garbage collected. There is no corresponding "free" call. Actually, that means cons is not like malloc.

Clojure takes advantage of being fully managed code

Do you realize that Lisp uses automatic memory management (just like the JVM or CLR)?


Lisps are high-level languages. The JVM and CLR are virtual machines that interpret and compile bytecode. Some Lisps use virtual machines: Clojure is an example of one that does not use its own, CMUCL and Racket are examples of Lisps that use their own VMs, and Clozure CL and SBCL are Lisps that compile directly to native code. How Chicken Scheme fits into the whole virtual-machine model...well, it's just too fine an ontological line for me to call.

Why are there no Fortran machines or Algol Machines or C machines?

The `cons` cell is a raw memory structure, just one step removed from assembly, and originally accessed directly in machine code with `car` and `cdr`. Its associated symbol is literally a pointer directly to a memory location. It's so close to assembly language that thirty years ago implementing it directly in hardware was considered practical.


The `cons` cell is a raw memory structure

Its associated symbol is literally a pointer directly to a memory location

A cons is not "raw memory"; it is a typed data structure, and in Common Lisp it is even considered an OOP object of the cons class. It is equivalent to having a data structure called "pair" that can hold two things.

All memory-allocated data structures ultimately have an address of some kind, so this whole discussion about pointers is utterly meaningless. For that matter, arrays, strings, and basically every known memory data structure is "raw" too. After all, they have an address.

The fact that cons/car/cdr can be implemented cheaply in hardware stems from the fixed and small size of cons. That doesn't make it somehow inferior.

By the way, the Java stuff is implemented in hardware too, does that make Java inferior? http://en.wikipedia.org/wiki/Java_processor


Clojure constructs don't allocate memory - okay...

> Although Clojure still has a `cons` it has been replaced by `seq` as the fundamental abstraction.

Common Lisp has lists and sequences as abstractions. But it does not take away the low-level operations.


Sorry for not writing more clearly. Because Clojure does not need to allocate memory, it can use `seq` as its fundamental constructor. A `seq` is anything that implements `first` and `next`; it's a higher-level abstraction than `cons` because it frees up `()` and `nil` from `car` and `cdr` [or `first` and `rest`]. At a higher level, the abstraction facilitates differentiating a breadth-first search tree from a depth-first search tree as types of data structures (though I'm not necessarily advocating the idea). Sure, it could be done in a traditional Lisp, but the best-case sausage factory is probably less sanitary.

I'm not making a case that Clojure is better than other Lisps - getting rid of `cons` and relying on the JVM makes typing more critical in an environment that retains a degree of type opacity typical of dynamic type systems.


> A `seq` is anything that implements `first` and `next`; it's a higher-level abstraction than `cons` because it frees up `()` and `nil` from `car` and `cdr` [or `first` and `rest`].

See SERIES in Common Lisp. http://series.sourceforge.net

> At a higher level the abstraction facilitates differentiating a breadth first search tree from a depth first search tree as types of data structures

Typically used as CLOS libraries in Common Lisp.

> getting rid of `cons`

That's because Clojure is implemented in something else and doesn't go very deep.

Common Lisp goes deeper because it is its own implementation language. Much of each implementation is written in Common Lisp itself (compiler, parts of the runtime, ...). On something like the MIT Lisp Machine, even the garbage collector was written in Lisp.

Lisp is reflective building material + machineries + some pre-build stuff.

Clojure is more like pre-built stuff on top of some other infrastructure.


I'm not advancing a moral argument. I'm advancing an historical one. It's an amoral fact that Common Lisp was standardized more than twenty years ago - before the commercial internet, wide availability of systems suitable for distributed computing, and even before it was clear that GUIs would win the war. Moreover, the last standardization was grounded in the previous effort, and that was before GNU and when dedicated hardware looked like the future. And that first standardization was the result of peace talks between tribes that had developed customs over the previous two decades (or thereabouts).

All that said, what Clojure shares with Common Lisp, is that practical programming issues come first and that theology comes last - in the end all questions of doctrine can be resolved in the mathematics of the lambda calculus.

Clojure and Common Lisp express the same attitude toward programming and programmers. Their attitude toward academic computer science is perhaps a bit different, largely for reasons of history: Lisp has one, Clojure doesn't.


You're attempting a partial rewrite of history. Common Lisp was developed over the Internet. The participating groups were all nicely networked and had lots of distributed systems. Xerox PARC and MIT pioneered the GUI. Lisp Machines were the first workstations with GUIs commercially available, in 1981. Common Lisp came with its own native binding of X11 - CLX - from day one.

> Moreover, the last standardization was grounded in the previous effort and that was before GNU and when dedicated hardware looked like the future.

Hmm, Common Lisp was developed to run efficiently on stock hardware from day one. Motorola 68k, Intel 86, then SPARC, POWER, MIPS, ...

> Clojure and Common Lisp express the same attitude toward programming and programmers.

They don't.

Common Lisp was developed as a standard Lisp language for application development (in domains like expert systems, cad systems, etc.) which favors giving the programmer maximum freedom and makes little assumption on what it is running.

Example: Common Lisp comes with minimal expression syntax and an extension mechanism.

Clojure was originally developed to bring a Lisp to the JVM, which integrates with Java, reuses parts of Java and favors things like functional or concurrent programming.

Example: Clojure comes with more complex expression syntax and no extension mechanism.


Re: rewriting

I stated that Common Lisp was developed before the commercial internet & args.


Let's see:

symbolics.com, Lisp Machine manufacturer, first ever .com domain. In 1985.

bbn.com, Lisp Machine developer, second .com domain in 1985.

Thinking Machines, Parallel AI Machine for Lisp, third .com domain in 1985

MCC, had hundreds of Lisp Machines, fourth .com domain in 1985

DEC, co-sponsor of CL, fifth .com domain in 1985

Xerox, Lisp Machine developer, seventh .com domain in 1986

...

I'd say all the companies who participated in the CL development from 1981-1994 were commercial entities on the Internet. All of them offered a variety of network services in their operating systems: mail, chat, terminal, X11, file sharing, remote printing, remote booting, name server, ...

http://en.wikipedia.org/wiki/List_of_the_oldest_currently_re...

All were having networked machines and networked operating systems.


Common Lisp: The Language, Guy L. Steele. Digital Press, 1984. ISBN 093237641X.

However, I am in no way implying that the creation of the .com TLD is the resolution of the phrase "commercial internet" in ordinary language & args.


ANSI CL standard, published in 1994.

You wrote:

> I'm not advancing a moral argument. I'm advancing an historical one. It's an amoral fact that Common Lisp was standardized more than twenty years ago

It was standardized 1994 with the ANSI CL standard. Exactly 20 years ago.

> - before the commercial internet,

Commercial ISPs started around 1989.

> wide availability of systems suitable for distributed computing,

Distributed computing was available earlier.

> and even before it was clear that GUI's would win the war.

Lisp supported GUIs much earlier. The Common Lisp standards group had even a subgroup for this topic.

Anyway, it's not clear what this all says about when CL was developed. Scheme, Objective-C, C, C++, and Smalltalk were all developed before Common Lisp. What follows from that?


>`cons`, is a raw memory allocation. It`s `malloc`for linked rather than sequential memory.

>Clojure takes advantage of being fully managed code

>writing a macro is the sort of thing hubris drives lazy impatient Lispers to do

The ignorance of Clojure programmers is really incredible.



Okay, that covers eudox's third point, but the first two are still valid. Your claim that Clojure does something fundamentally different from Lisp when it comes to memory allocation is misinformed and, frankly, bizarre. And I have no idea what your point about managed code is -- Lisp code runs in a Lisp environment the same way that JVM or CLR code does.


You also don't need to quote data structures other than the list in Clojure since they evaluate to themselves.

(foo '(1 2 3)) simply becomes (foo [1 2 3])


Just like Common Lisp and Scheme.

Common Lisp:

A vector:

    CL-USER 34 > #(1 2 3)
    #(1 2 3)
A string:

    CL-USER 35 > "1 2 3"
    "1 2 3"
A structure:

    CL-USER 36 > #S(FOO :A 1 :B 2 :C 3)
    #S(FOO :A 1 :B 2 :C 3)


What I appreciate from a design standpoint, i.e. aesthetically, is the `defn` special form versus `defun` and `define`. In Common Lisp, `defun` takes on the general form of the lambda:

     lambda  (x)(+ x 1)
     defun f (x)(+ x 1)
Syntactically, `(x)` looks like it should be evaluated, but isn't, because there are special rules for special forms, and each special form can have its own rules:

    let ((x 7)(y 3))
        (+ x y)
is a whole `nother set of syntactical rules which are even more in conflict with normal rules of evaluation.

Scheme, keeps the let and uses a syntax for `define` that mimics the function call rather than the lambda.

     define (f x) (+ x 1)
     f 1
In Clojure, the use of a vector for `defn` arguments follows its formal lambda form, like Common Lisp, and is very much consistent with Clojure's rules for evaluation.

    fn     [x] (+ x 1)
    defn f [x] (+ x 1)
Likewise Clojure's `let` syntax does not require wetware parsing to recognize what gets evaluated and what doesn't.

    let [x 1
         y 3]
      (+ x y)
What I am appreciating about Clojure as a language is that it prunes some historic artifacts from the Lisp family tree. Raw memory allocation with `cons` made sense when human memory could retain more facts than a computer's fast memory. This meant it was easy to think of useful programs too big to fit in memory. That's distinctly harder today where doubling fast memory is less than $300 for the vast majority of computers.

Lisp was developed as a direct improvement over assembly. Clojure was developed as a direct improvement over Java over C++ over C over assembly. It's a rethinking of what a Lisp needs (when backwards compatibility is ignored). It's an interesting design.


Lisp has a different design philosophy. Lisp prefers minimal syntax for s-expressions, and then Lisp syntax is defined on top of that. Learning what is evaluated or not is part of that.

Using array syntax for arglists is just arbitrary. Clojure is inconsistent anyway.

    (defmacro and
      ([] true)
      ([x] x)
      ([x & rest]
        `(let [and# ~x]
           (if and# (and ~@rest) and#))))
Can you explain why this has parentheses: ([x] x) ? Is that meant to be evaluated?

> Likewise Clojure's `let` syntax does not require wetware parsing to recognize what gets evaluated and what doesn't.

    let [x 1
         y 3]
      (+ x y)
This is all inconsistent. A vector [x 1 y 3] is normally evaluated. But in LET, 1 and 3 are evaluated while x and y are not.

> What I am appreciating about Clojure as a language is that it prunes some historic artifacts from the Lisp family tree.

Clojure adds more artifacts than it removes.

> Lisp was developed as a direct improvement over assembly.

Lisp was developed for symbolic computation with recursive functions in the domain of list processing. It has nothing to do with assembler.

> It's a rethinking of what a Lisp needs (when backwards compatibility is ignored).

It's a rethinking what a Java programmer might want as the next step. As Lisp not so much.

> It's an interesting design.

That's true.


I've only briefly played with Clojure and it's been a very long time since I used Common Lisp a lot, but...

"Can you explain why this has parentheses: ([x] x) ?"

For the same reason "((= a 2) (setq a 3))" has parentheses in:

    (cond ((= a 1) (setq a 2))
          ((= a 2) (setq a 3)))


Can you explain why "((= a 2) (setq a 3))" has parentheses in that statement? Ideally without just saying "for the same reason as <some other example>"? Because I don't see any value in having them in that example.

    (cond (= a 1) (setq a 2)
          (= a 2) (setq a 3))
Seems like it could serve exactly the same purpose?


I really did mean to have a question mark at the end of my comment. I'm not actually sure of the answer.

The initial thought would be for structure: cond takes any number of arguments, each made up of a conditional -> expression pair. Your suggestion, which is exactly my initial belief before it burned me back in the day, would make cond take an alternating list of conditions and expressions; clearly that's not nearly as beautiful.

The structural argument is wrong, though, and your suggestion doesn't work at all. For what must have seemed practical reasons at the time, the expression side is an implicit PROGN, so you can do "((= a 2) (setq a 3) (setq b (potato)))".[1]

Common Lisp has some wacky syntax. Large chunks of which are historically (hystorically?) path-dependent. Anyone who tells you otherwise risks calling up the viscous LOOP troll.

In Clojure, the defmacro thingy looks like it takes multiple expressions after the params in the body there, so it needs ()'s around the individual cases.

[1] http://www.lispworks.com/documentation/lw60/CLHS/Body/m_cond...


In Clojure, cond does not have those extra parentheses - that's a Lispism. cond in Clojure works like your example, except that Clojure does not have setq.

The reason you have those parentheses on defmacro is that defmacro shares syntax with defn and fn for function declarations; in turn, these allow multiple side-effecting statements in a function. For example:

  (defn my-function [args]
    (prn "Hello, world!")    ; do something with side effects...
    args)                    ; then return a value
This works fine in the single-arity case, but when you're declaring a function that takes multiple arities (with a different implementation dispatched for each), there's no way to tell where each arity ends and the next begins - after all, something like [args] is just data and could easily be a valid expression. The parentheses are then needed to disambiguate where one arity ends and the next begins:

  (defn greet
    ([] (println "Hello, there!")
        (swap! greeting-count inc)
        :greeted)
    ([name] (println "Hello, " name)
            (swap! greeting-count inc)
            :greeted))
This is a space where clojure seems a bit inconsistent - let, fn, loop, and try all allow multiple expressions, while cond and if don't (but you can get it back using the (do) special form). This however does have some logic to it - let and loop get it for free as they don't need extra brackets to disambiguate, and for fn it's very helpful to be able to just chuck additional statements (debug prints, for example) at the top or bottom without having to wrap in (do). Plus it's free in the more-common single arity case, and keeping it around for multiple-arity keeps things internally consistent. On the other hand, if and cond would require disambiguating syntax to take additional statements regardless, and so that syntax might as well be (do).
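(As an illustration of what dispatch-by-arity means, here's a Python sketch; the `by_arity` helper is invented, since Clojure's defn does this natively:)

```python
def by_arity(*impls):
    """Pick an implementation based on how many arguments were passed.
    Each impl is a plain function; we index by its parameter count."""
    table = {impl.__code__.co_argcount: impl for impl in impls}
    def dispatch(*args):
        return table[len(args)](*args)
    return dispatch

# Two "arities" of greet, like Clojure's ([] ...) and ([name] ...):
greet = by_arity(
    lambda: "Hello, there!",
    lambda name: "Hello, " + name,
)
print(greet())        # Hello, there!
print(greet("Rich"))  # Hello, Rich
```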


So we have a language called Clojure which groups by

   (... ([x] x) ([x x] x) ...); where there are complex evaluation rules...
and

   [x1 x2 x3], where all x are evaluated
and [v1 e1 v2 e2 ...], where only e is evaluated, in LET

and

   [v1 ... vn], where no v is evaluated, in argument lists
etc., etc.

I somehow fail to see the dramatic improvement over just using parentheses. Instead of learning basic rules for forming and naming expressions with parentheses, I now learn a few inconsistent rules for forming expressions with a set of delimiters - which is no problem, but it is not the huge progress you claim.


You may be misunderstanding me. I don't claim any progress at all. Only trade-offs and fuzzy areas and lesser or greater poor decisions.


I actually like the `let` syntax improvement in Clojure, although I like how Racket deals with this issue too, which is by using both () and [] (and possibly {}) as interchange-able. It looks like this then:

    (let 
        ([x 1]
         [y 3])
      (+ x y))
which is not too bad either.


Your mileage may vary. My experience with Racket's use of square brackets was different. When I first looked at Racket code, I assumed square brackets had semantic significance (I'd seen some Lisps and Schemes and glanced at Clojure code snippets, in addition to a number of non-Lisp languages). I spent mental energy trying to identify why they were used in one place and not another. It was only after a significant time that I realized they are merely stylistic flourishes.

As I have written Racket code, I have found that square brackets get in the way. Don't misunderstand me, they look great when they are printed, but changing the internal structure of an expression requires changing concluding logic from '))])))' to ')))]))', so I've decided it's more pain than gain. I'd rather just add another close parenthesis and move on to running tests than navigate character by character.

I love Racket, don't misunderstand me. But the square brackets are a faster horse at best, not a bicycle.


> Don't misunderstand me, they look great when they are printed, but changing the internal structure of an expression requires changing concluding logic from '))])))' to ')))]))', so I've decided it's more pain than gain.

But how is this different from Clojure? You can have exactly the same problem there[1]. Whenever you introduce irregularity to the syntax you're going to have these issues. Clojure does it for "readability" and so does Racket. The difference is that in Racket you can opt-out and only use normal parens, while you can't do this in Clojure - which is probably fine in their respective cases, as they target a bit different audiences I think.

[1] But you do use an editor which makes this kind of problem impossible to occur, right? Like Emacs with Paredit or DrRacket with correct settings?


It's not for readability in Clojure. [+ 1 2] and (+ 1 2) and '(+ 1 2) all evaluate to different values. The closest are [+ 1 2] and '(+ 1 2) as arguments to anything but `eval` or `seq?`. Under `eval`, '(+ 1 2) is identical to (+ 1 2); under `seq?`, [+ 1 2] and (+ 1 2) both evaluate to false.

As I said it's complicated.

In Racket on the other hand `))))))` is equivalent to `)))]]]` except in so far as matching goes. The brackets are just decorations.


Oh, but you're wrong I believe. In Clojure, as well as in most other lisps, this:

'(1 2 3)

constructs a value of type list, while this:

[1 2 3]

constructs a value of type vector.

They are not the same and you cannot say that one somehow becomes the other. And from what I remember Clojure does have a list data-type and it still needs to be quoted, except for empty list literal, (), which is self-evaluating.

So, in Clojure you need to quote your lists, exactly like in many other Lisps, and you have literals for other data structures, exactly like in many other Lisps ([] in Elisp, #() in Racket, CL covered by lispm). But of course Clojure is so much more readable because it has so few parentheses in comparison... to Java, I suppose.


For what it's worth, using quoted lists is not considered idiomatic in Clojure (so be forewarned it might get you voted off the island).

Also

   (let [my-list '(1 2 3)
         my-vec   [1 2 3]
     (= my-list my-vec))
returns `true`. It's one of the subtleties of Clojure and its implementation on top of Java's object and type systems, and one which can be difficult to parse out at the lower end of the learning curve...which is where I am.


But that's besides the point I tried to make, which is that pretty much ALL Lisps include more syntax than just pure s-exps. You may like one version better than other, but saying that Clojure is the only Lisp with "revised syntax" or "syntax designed for readability" is silly.

Clojure is a member of Lisp family and it's a formidable and worthy member at that. But it's not as ground-breaking or unique as some people would like to think - the family is vast and multi-generational and many things were already tried in some form or the other.

I still like Clojure's particular view on Lispiness, though.


silly

Finding a rationale for calling other people silly seems more like the point you were trying to make.

To the point of attributing statements and beliefs to me that I neither stated nor indicated I held.


Sorry if it looked like that, I didn't mean it as directed to you personally, it was more to jeremiep above and even more to the Clojure community at large, where it's easy to encounter such views.


"Sorry if it looked like that" looks kind of like an apology, but isn't.

The opening line of the first paragraph of your comment directly referred to the content of my post and the last word in that paragraph was silly.

Now understand I'm in my 20th year of internet use, so it's not the most uncivil thing that's ever been said to me. It's not even close to a rather mild incivility that I might have written a few years ago.

But it is uncivil and out of keeping with HN's community standards. There are plenty of other places that not only accept but tacitly encourage posts with that rhetorical flavor. But that's not HN.

https://news.ycombinator.com/item?id=7253493


   (let [my-list '(1 2 3)
         my-vec   [1 2 3]
     (= my-list my-vec))
That syntax is wrong. It lacks a closing ].


Lisp... doesn't really have syntax. S-expressions have syntax: parens, whitespace, symbols, comments. S-expressions are for data. Lisp is a subset of the sexps.

Lisp adds just one rule: a Lisp program is an sexp whose root node is a function symbol.

Naturally, since sexps nest, Lisp programs can nest.

The author demonstrates nested computation with bash's $( .. ) syntax. Lisp, by the nature of the sexps, is nested computations. I think the author almost hits the nail on the head here:

> Our two basic rules — program-name-first and $()-for-composition — allowed us to explicitly specify the order of evaluation, so there was no need to do any fancy parsing beyond what the shell already provides.

Sexps provide nesting/composition. Lisp just means program-name-first.


> Lisp adds just one rule: a Lisp program is an sexp whose root node is a function symbol.

Now you explain these three examples. All have a symbol at the start of the list.

    * (let foo bar)
    ; in: LET FOO
    ;     (LET FOO
    ;       BAR)
    ; 
    ; caught ERROR:
    ;   Malformed LET bindings: FOO.
    ; 
    ; compilation unit finished
    ;   caught 1 ERROR condition


    * (lambda a b)
    ; in: LAMBDA A
    ;     (LAMBDA A B)
    ; 
    ; caught ERROR:
    ;   The lambda expression has a missing or non-list lambda list:
    ;     (LAMBDA A B)


    * (defun foo (a) (+ a 10) (declare (fixnum a))
    )
    ; in: DEFUN FOO
    ;     (DECLARE (FIXNUM A))
    ; 
    ; caught WARNING:
    ;   There is no function named DECLARE. References to DECLARE in some contexts
    ;   (like starts of blocks) have special meaning, but here it would have to be a
    ;   function, and that shouldn't be right.
Obviously the SYNTAX for LET, LAMBDA, DEFUN, ... is more complex than what you think.

Looks like there is Lisp code where it is not sufficient to have a symbol at the start of an s-expression. There seem to be more constraints to the allowed s-expressions.

For example the syntax for LET is:

    let ({var | (var [init-form])}*) declaration* form*


Yeah you're right, it's not turtles/functions all the way down. At some point, your functions are defined in terms of special forms.

Your special forms will either be natively supported or they'll be written with a defmacro in terms of native special forms. The definitions can't be circular, of course: if defmacro was used to define if, then if cannot be used in the definition of defmacro.

So my "rule" isn't really a hard rule. Macros/forms are the exception. But it seems more intuitive for me to think in terms of functions whenever possible, then to bend the rule when macros get involved.


No, Lisp has a set of built-in special forms with their specific syntax. In many Lisps (incl. Common Lisp), there is no way for the user to write new special forms.

Macros are not for implementing special forms.

> Macros/forms are the exception.

Common Lisp has around 30 special forms and many more macros. Some with a special complex syntax, like LOOP.

Programming Lisp with only function call syntax is practically impossible. LAMBDA, DEFUN, ... already have additional syntax.


I'm being loose with my terminology. I guess that was a mistake. Look, I don't even know what we're arguing over. Here's some words from CLtL2:

    5.1.3. Special Forms

    ...

    An implementation is free to implement as a macro any construct described
    herein as a special form. Conversely, an implementation is free to implement
    as a special form any construct described herein as a macro if an equivalent
    macro definition is also provided. The practical consequence is that the
    predicates macro-function and special-form-p may both be true of the same
    symbol. It is recommended that a program-analyzing program process a form
    that is a list whose car is a symbol as follows: 

    1)  If the program has particular knowledge about the symbol, process the
        form using special-purpose code. All of the symbols listed in table 5-1
        should fall into this category. 

    2)  Otherwise, if macro-function is true of the symbol, apply either
        macroexpand or macroexpand-1, as appropriate, to the entire form and
        then start over. 

    3)  Otherwise, assume it is a function call.


Okay, I'll explain that in detail:

> Yeah you're right, it's not turtles/functions all the way down. At some point, your functions are defined in terms of special forms.

Functions are defined in terms of special operators, built-in functions, user-level functions, built-in macros, user-level macros and data.

> Your special forms will either be natively supported or they'll be written with a defmacro in terms of native special forms.

DEFMACRO can't define special operators. It can only define macros. Common Lisp lacks the facility to define special operators.

> The definitions can't be circular, of course: if defmacro was used to define if,

IF is a built-in special operator. No macro.

> then if cannot be used in the definition of defmacro.

Generally it could. Macros can be recursive, for example.

> So my "rule" isn't really a hard rule. Macros/forms are the exception. But it seems more intuitive for me to think in terms of functions whenever possible, then to bend the rule when macros get involved.

Macros and special forms are everywhere in Lisp.


:D thank you for posting

You seem to have laid it all out exactly, your attention to detail exceeding mine. What you've been saying is exactly how it works in Common Lisp.

Again, I do appreciate you explaining the specifics here, so let me just ramble why I think you're right and I still think "Lisp has no syntax". I did not take a Compilers course, but I call em like I see em. proceed at your own boredom

I guess my real problem is that I can't write down the line between syntax in Lisp and semantics. Many things are mere consequences of particular implementations, so I'll just list 3 scenarios.

SBCL could, ostensibly, redefine IF as a macro (defined with, say, COND). Gasp, I know. Would that break any code? Or would it just break the ANSI standard? Speed aside, how much does it matter?

SBCL could also change IF so the true & false parts are swapped. A criminally stupid move, sure. Then it's definitely not Common Lisp anymore. They'd've changed the calling convention. Note that the lexer wouldn't change at all, just the parser (pretend there's a separate lexer/parser, even though popular Lisps are all self-hosting). This seems to me a change in semantics, though I understand why one might argue it's a change in syntax.

SBCL could also do a third stupid thing: change IF so the test-case is wrapped with square brackets, like (if [> 2 3] x y). This would be a major change, of course, and it wouldn't be Common Lisp. But the reader could be rewritten to handle the new characters and all the source code could be changed. This would definitely change the syntax. Note that the lexer needs tweaking, but the parser remains exactly the same (you don't even need new tokens).

If the syntax changes, the lexer changes. If the semantics change, the parser changes. The interface between the lexer and the parser is the List. Presumably, one could write a Lisp program in, and a runtime for, JSON lists instead of sexp lists, but I fail to see the benefit. Sexps are nearly the Ur-language for lists, unless you aren't trying to be a subset of ASCII (e.g. binary encodings). C-style arrays are no fun:

    ----main.lisp
    {"defun", "factorial", {"n"}, 
        {"if", {">", "n", "0"},
            {"*", "n", etc},
            {etc}}}
What terrible syntax for such beautiful semantics.

Anyhow I've been thinking about Lisp a lot today, and now my brain is kinda tired, so I'm going to go take my dogs for a walk. Thanks for the thought-provoking discussion.. :)


> Lisp... doesn't really have syntax.

Yes, it really does. Let me finish that chain of reasoning for you:

S-expressions have syntax. Lisp is a subset of the sexps. Therefore, Lisp has syntax. QED

Yes, Lisp has very simple syntax. Yes, it's neat that eval can be implemented in a few dozen lines of Lisp. But as a practical matter Lisp is a programming language like any other. This sort of mysticism does not encourage people to learn Lisp or help the ideas behind Lisp become more mainstream.


I believe that arh68's point was that Lisp doesn't have a syntax that is something different than the syntax of S-expressions. You're actually agreeing, but you're quibbling over the way it was stated.


Lisp syntax is quite a bit more than prefix operators + postfix arguments.

* s-expressions are a syntax for data

* Lisp syntax is expressed on top of s-expressions, reusing s-expression syntax for data

* Lisp has built-in syntax for several constructs: lambda, let, block, setq, if, ...

Many programming languages have some form of prefix operators.

Btw., some Lisps allow omitting the outer parentheses in a Listener (aka REPL):

    CL-USER 30 > delete-file "/Lisp/foo"
    NIL


Speaking of leaving parentheses out of lisp - you can go the whole way and have a whitespace lisp:

http://draketo.de/light/english/wisp-lisp-indentation-prepro...


Stuff like that exists for decades:

For example RLISP:

    SYMBOLIC PROCEDURE SMEMBER(U,V);
       %determines if S-expression U is a member of V at any level;
       IF U=V THEN T
        ELSE IF ATOM V THEN NIL
        ELSE SMEMBER(U,CAR V) OR SMEMBER(U,CDR V);
Which would be in Lisp:

    (defun smember (u v)
     "determines if S-expression U is a member of V at any level"
      (cond ((equal u v) t)
            ((atom v) nil)
            (t (or (smember u (car v))
                   (smember u (cdr v))))))
      
RLISP has been developed somewhere in the 60s/70s as a parentheses free notation for Lisp on top of Standard Lisp. Much of the computer algebra system REDUCE is written in RLISP.


This strikes me as a much bigger and less obvious transformation than the one suggested by the grandparent (although interesting nonetheless). Specifically, this seems to take semantics into account, whereas the suggested representation is apparently purely syntactic. For instance, the transformation from

    IF U=V THEN T
into

    (cond ((equal u v) t))
would seem to require an understanding of the IF construct and a mapping of infix operators to lisp functions at the least.

Now, maybe this syntactic transformation has been around for decades too (the fact that I've independently toyed with the same idea suggests it's not uncommon to play with), and alternative Lisp syntaxes are quite interesting in their own right, but I do feel that the whitespace-lisp suggestion has merit in its own right (but again, I'm clearly biased). Essentially, it's an alternate syntax for s-expressions, not lisp.


This is directly addressed in the article. The main point was simply that prefix notation for all things isn't that confusing. In fact, the main place it is somewhat confusing is with the math operators.


Perhaps it's meant as a joke, but "prefix-RPN" grinds my gears. Just call it "Polish Notation", instead of "prefix Reverse Polish Notation".


Look on the bright side: back when HP48 calculators were more common and RPN was better known, I heard a lot of "Reverse RPN"


Some of us just said that as a joke. Like the day we finally thought about the banks telling us to enter our PIN numbers and decided to embrace the redundancy and repetition.


I once worked on a project where I was writing Lisp, C and PostScript most days - quite an interesting combination.


That sure does look like lisp. It also looks like a way I would definitely not use to do arithmetic if I could avoid it.

This is more of a cautionary tale of going too far in making programs do 'one thing' than it is an endorsement of lisp.


I prefer comparisons of XML and LISP syntax to counter people complaining about the latter, while happily applying the former to pretty much everything under the Sun (application configuration, data exchange, dependency mgmt, build mgmt, scripting...) I'm not saying LISP is good and XML is bad; but I've seen some baffled faces when pointing out the similarities.


Stopping at the syntax when talking about Lisp is pretty narrow-minded and completely misses what makes Lisp so great. It's like saying to a car driver that motorcycles are nice because they only have two wheels and thus take less parking space. Technically true, but not what makes them exciting. When you describe taking corners and being one with the road to the coffee-drinking, radio-listening commuter, he won't understand what the point is. But if he were to drive a bike around a circuit, oh boy! I'm probably going to fail miserably but let me attempt to explain cornering and being one with the road in Lisp.

What's so great about Lisp? : its extreme malleability in the way you write code and the data it uses.

Code and data are the exact same thing in Lisp. They are represented the same way: as lists wrapped in parentheses, which can be nested. These are called s-expressions and they take the form (<operator> <operand1> <operand2> <operand3>). There is no fundamental difference between code and data; both are just lists written as s-expressions:

  (list 1 (list 2 3))
  (+ 1 (+ 2 3))
Another example :

  (task 
    (time 14 00)
    (log "beginning of task")
    (job (backup-files (important-files-in (list-files))))
    (job (delete (list-files))))
Quick, what's that above, code or data? How about both?

The one nice thing about s-expressions (sexp) instead of classical curly braces is that the code written in sexp IS ITSELF the abstract syntax tree of that code.

For example, think of a JavaScript function: function add(a, b){return a+b;}

You could convert this function into its syntax tree like this :

  (define-function (name add) (parameters a b)
  (body (return (+ a b))))
If we simplify things a bit we end up with :

  (defun add (a b)
    (+ a b))
Lo and behold, the example above IS Lisp code. Is that code or data? How about both?

The syntax of Lisp is the same as its internal representation. That's called homoiconicity.

What's so great about that? Well, mix that with macros whose language is Lisp itself and can do anything with s-expressions and you end up wielding great powers.

A simple example. Let's go back to our task example. We can treat it as data :

  //not actual lisp code but you get the point
  var my-task = (task 
    (time 14 00)
    (log "beginning of task")
    (job (backup-files (important-files-in (list-files))))
    (job (delete (list-files))))
my-task is now a list containing sub lists representing a task. What if we did this now :

  //not actual lisp code
  (defun time (hour minutes) (print "%d:%d" hour minutes))
  (defun log (string) (print string))
  //we say that task is a function
  //that should call each list it contains as a function
  (defun task () /*execute each list it contains*/) 
  //...
Now we can just do:

  (my-task)
What do you think is going to happen? Yes, the data, the task is being executed.

Now for the last touch: add to Lisp a powerful macro system that can take any s-expression (any list) and transform it any way it sees fit, and you end up wielding great power. What's that macro language? It's Lisp itself. Code is data. Lisp code evaluating other Lisp code. I am so enlightened.

Not enlightened? Didn't get anything out of my post? Then give Lisp a try! You have to dive into Lisp to be able to get it. As Eric Raymond put it: "Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot."


For the past few months I've been studying Lisp, reading SICP, and just trying to become more familiar with the concepts. Like many people, I was put off by Lisp's seemingly arcane syntax, which now feels beautiful to me. I can't fully explain it, but for years I had been searching for a way to reduce and abstract duplication in the structure of my code. (There's a fundamental difference between this, and simply pulling something out into a function or method.) It wasn't until I came across a great explanation of homoiconicity and macros that I became "enlightened." Lisp was what I was looking for all along.

One day, I tried explaining it to my manager, and he just couldn't get it. It wasn't clicking. Until I showed him a demo of symbolic differentiation (a la SICP), at which point he said, "Holy shit," and sat there for a few minutes considering all the implications. That was one of the greatest moments we ever shared together.


For those who aren't so lucky to have an understanding manager (though you didn't say your manager then agreed to let you start writing everything in a lisp...), I just wanted to bring out the punchline of this hilarious and melancholy comment[1] from a couple months ago: "And yet: the money guys are offering money. Just swallow your pride, play 'Stairway to Heaven' at the wedding, and pretend you've never had crazy eyes when talking about homoiconicity, and the rent will be paid."

[1] Don't upvote me: https://news.ycombinator.com/item?id=7423626


Nice comment. Very reminiscent of PG's "Beating the Averages" essay:

http://www.paulgraham.com/avg.html

Paul's essay was the catalyst that got me to (re-)discover Lisp for myself. Scheme was taught in my CompSci I course a long time ago; but at the time, it was just a weird looking language that I forgot (CS II and beyond was all in C++ at my college).


I thought a pipeline would be a more idiomatic way of doing this in bash, but my initial attempt resulted in quite an unreadable mess:

    $ cat <(echo 2) <(cat <(cat <(echo 10) <(echo 4) | +) <(echo 7)| d) | x
    4
Is there any way of making that more readable? My variation of the C utility source code is here: https://gist.github.com/zokier/c55e9c9736009981b494
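
For what it's worth, one readable alternative drops the stream plumbing and names the intermediate results with bash's built-in arithmetic expansion. This is a sketch that sidesteps the C utilities entirely:

```shell
# Same computation as the pipeline above: 2*((10+4)/7),
# using arithmetic expansion instead of <() process substitution.
sum=$(( 10 + 4 ))      # 14
quot=$(( sum / 7 ))    # 2 (integer division)
echo $(( 2 * quot ))   # prints 4
```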


Let the operators take one argument?

    # 2*((10+4)/7)
    $ echo 10 | add 4 | divide-by 7 | times 2
    4
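
A minimal sketch of such one-argument filters as bash functions. The names are illustrative: divide-by becomes divide_by since hyphens in function names aren't portable, and times becomes mul to avoid colliding with the shell's times builtin.

```shell
# Each filter reads one number from stdin, combines it with its
# single argument, and writes the result to stdout.
add()       { read -r x; echo $(( x + $1 )); }
divide_by() { read -r x; echo $(( x / $1 )); }
mul()       { read -r x; echo $(( x * $1 )); }

# 2*((10+4)/7)
echo 10 | add 4 | divide_by 7 | mul 2   # prints 4
```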


I do like that, it also makes the utility more filter-like. Here is that implemented in C: https://gist.github.com/zokier/6b43032afb248bef68b7


hey guys i think haskell is a lisp too


Why was this downvoted when it clearly shows how this article is misguided? Just because a language uses parentheses for some of its constructs doesn't mean we can call it Lisp syntax.


I don't know if Rich Hickey said it first, but he said it well when he noted that programmers get really upset when the parenthesis is on the left side of the function name, instead of the right side.

DoSomething(an_arg, another_arg);

vs.

(DoSomething an_arg another_arg)

I'm not sure the second is markedly less readable than the first.


In Fish shell backticks for evaluation are replaced by parentheses. One step closer I guess.


$() = `` in bash and zsh too. Bonus that $() can be nested.
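
A small illustration of that nesting bonus (plain bash; both lines print the same thing):

```shell
# $() nests with no escaping needed:
echo "$(echo outer-$(echo inner))"   # prints outer-inner

# Backticks need the inner pair backslash-escaped to nest:
echo "`echo outer-\`echo inner\``"   # prints outer-inner
```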


From that point of view, Tcl is even more lispy.

     set foo [somefun $a $b]
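
For comparison, here is the bash spelling of that Tcl assignment, with a hypothetical stand-in command somefun:

```shell
# Tcl:  set foo [somefun $a $b]
# bash: foo=$(somefun "$a" "$b")
somefun() { echo "somefun:$1:$2"; }   # hypothetical stand-in command
a=1; b=2
foo=$(somefun "$a" "$b")
echo "$foo"   # prints somefun:1:2
```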


I recently discovered Tcl and it immediately struck me as a refinement of the "stringly typed" design behind the Unix shell.

I've been wondering if it would be a good idea to try building a modern fish-style [1] shell aimed primarily at interactive use and one-liner expressiveness based on a Tcl interpreter. (I tried eltclsh and it's close but not quite that out of the box.)

[1] http://fishshell.com/


Well, if you limit yourself enough, almost any language is a 'lisp'. Maybe we should stop calling anything with parentheses a 'lisp'.


The filters you specify for LDAP searches were probably inspired by Lisp syntax too:

(&(|(uid=smoyer)(uid=swm))(mail=redacted@gmail.com))

To be honest, I'm not sure why the test expressions are infix while the logical operators are prefix. My guess is that it allows a filter to be specified without whitespace.


Wouldn't it be whitespace-free with prefix test expressions too?

    (&(|(=(uid)(smoyer))(=(uid)(swm)))(=(mail)(redacted@gmail.com)))


Wow ... I hadn't pictured it with parentheses, but you're right! I still don't know why the tests were infix then ;)


L++ is a programming language that transcompiles to C++. It uses Lisp-like syntax. Macros are supported via Racket's macro system: define-syntax, define-syntax-rule, and defmacro.

L++ | https://bitbucket.org/ktg/l


In Scheme, SRFI 105[1] gives the ability to do math with infix operators. There's no reason you can't have your cake and eat it too!

[1]: http://srfi.schemers.org/srfi-105/srfi-105.html


By that argument Objective C is using Lisp syntax [a b:1 c:2].

Besides, if you want to persuade me to use Lisp, then starting by comparing the syntax to the shell is probably a bad idea, as others have said.



This is silly, sh/bash natively support arithmetic operations with infix notation. He writes a contrived arithmetic solver in C to support his argument.
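
Indeed, the article's example expression needs nothing but bash's arithmetic expansion:

```shell
# bash evaluates infix arithmetic natively, precedence included;
# this is the article's (add 4 (mul 2 5)):
echo $(( 4 + 2 * 5 ))     # prints 14
echo $(( (4 + 2) * 5 ))   # prints 30 with explicit parentheses
```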


Oh god, another barely valid non-argument about the superiority of LISP syntax.

Well, guess what. Function name, followed by arguments, is not LISP, it's Mathematics. `A x` has meant applying a linear transformation to a vector (i.e. matrix multiplication) for a very long time.

Furthermore, saying "it's like LISP but without parentheses around it" is like saying "it's like Java but without parentheses around arguments and commas between them" when you actually mean to say "it's just like ML".

Finally, bash has infix operators (like `|` and `;`), which make it infinitely more readable than it would be without them. Really, if we can compare bash with any other programming language, it's ML.


The purpose of this article is try to teach people how to think about Lisp syntax through analogy, not argue that it is a superior syntax.


There's more prefix notation in math than you think:

∑ᵢ xᵢ

∫ x² dx

etc. And `A x` is more likely to be written as `(A x)` than `A(x)` in your example. I don't necessarily agree with the article, though...


tomp's point is that "LISP syntax" is actually math syntax. I think you're in violent agreement...


whoa settle down, the article wasn't about the superiority of LISP syntax, it was more about how the syntax is not as weird as you might think.


The weird thing about Lisp syntax is that people try to find all kinds of justifications why not having infix syntax is ok.

The fact that arithmetic can be written in weird form in other languages too isn't very interesting.


There's really just one justification, but it really is an interesting one: It gets the logical structure of the code in line with its lexical structure.

One reason this is neat is that it facilitates language extensibility to a radical degree. LISP has its macros, FORTH has its compiling words. Languages that allow infix syntax have tried to come up with something similar, but I haven't personally used one that I thought was particularly successful.


The weird thing is that some people feel that they need to find all kinds of justifications for why not having an infix syntax is OK.

Arithmetic in Lisp style is excellent and easy to read. The thing is, infix math in programming languages is ridiculously bad compared to what I can do with paper and pencil. It's really just a bad imitation, and that irks me far more than just writing it in Lisp. Math written in PLs is hard to read in general, and I'd love to see an Emacs package that, as you mark a mathematical expression, shows an inline picture of it rendered by LaTeX as 'regular' math.


AUCTeX has support for displaying inline pictures of LaTeX math expressions in Emacs. I just checked and it even says so on the homepage ;) http://www.gnu.org/software/auctex/


I know :)

I meant "take Java math code and convert into LaTeX, display inline", the first part is the missing one!


There are a lot of reasons to prefer RPN over infix syntax and why Lisp has chosen to adopt this notation. Although, I agree with the critiques of the article, it's not really saying much.


RPN is postfix notation. You add 1 and 2 using "1 2 +"

Lisp is prefix notation, which is just the opposite.


I just realized that makes Lisp "Polish Notation".

I'm sure that opens up some really horrible jokes.


Not quite. If you drop Lisp's variable-arity operators, that enables you to drop the parentheses. If you drop the parentheses, that gives you Polish notation.

For example:

  Lisp:  (* (+ 1 2 3) (- 4 5) )

  Polish: * + + 1 2 3 - 4 5


To me, it opened up a really horrible thought: Polish plus Hungarian.

Lisp with Hungarian names (shudder).


You're correct, I brain farted.


What's your justification for why having infix syntax is OK?


"People are used to it, from years of education."

It's a strong argument. Whether it is sufficient depends on what it is being weighed against.



