History of Lisp Parentheses (2019) (github.com/shaunlebron)
80 points by sph on Feb 1, 2023 | 74 comments



> A more conventional notation was planned but never happened:

This is evidence that S-expressions are objectively superior to other serializations. They are objectively simpler (this is why they were chosen in the first place -- writing a parser for sexprs is vastly easier than writing a parser for just about anything else), and the fact that humans who become accustomed to them early on generally prefer not to move away from them is evidence that other representations offer no advantages, and considerable disadvantages in the form of added complexity for no objective benefit. In other words, the main reason people don't like S-expressions is that they are indoctrinated from an early age (like grade school) into objectively inferior serializations, which trace their roots literally into antiquity.

Lisp should be taught in elementary school. Children should be taught to think not in terms of "two times three" and "two plus three" but rather "the sum of two and three" and "the product of two and three". It would make the world a better place in the long run.


One of the things that makes me jealous of programmers who use Lisps in their day-to-day is how powerful code navigation and editing become.

Programming is, as everyone here knows, a thinking discipline first...

But when you do formulate a thought, you want the typing, navigation and editing to be as fluent, fast and expressive as possible.

You want to extract and move subexpressions without friction. You want to jump around the code and “zoom” in and out of contexts. And of course evaluate every subexpression in isolation.

All of this stuff is just much harder or limited without something like S-expressions.


And when you want to write programs that move or translate subexpressions, it is so much easier when you aren't converting to and from a stream of bytes or tokens on the fly as well.


"objectively simpler" is just false. You might prefer it, but that doesn't mean it is simpler.

The reason people mostly don't like it, in fact, is because they find it more complex. A simple infix expression - say (2 + 3 - 1) / (5 - 2 + 4) - becomes the comparatively more complex (/ (- (+ 2 3) 1) (+ (- 5 2) 4)), with multiple layers of nesting to parse, operators a large distance away from what they are operating on, etc. There's a reason it isn't taught in primary school, and it isn't objectively superior, just a trade-off some people prefer because of other benefits they think it brings.


Not going to comment on the “objectively” part of this discussion. But having come to S-exps late in life, I would say they definitely are “simpler” over the whole problem domain of programming computers. Programming has many examples of syntax and constructs that definitely are simpler for grade school level programs, but which make your life a living hell of complexity if you rely on them for building real-world software.


I think you're describing the difference between being "simple" (objective, absolute) and being "easy" (subjective, relative).

Of course, I'm paraphrasing this classic: https://www.youtube.com/watch?v=SxdOUGdseq4


> "objectively simpler" is just false. You might prefer it, but that doesn't mean it is simpler.

I meant in terms of the complexity of the code needed to parse them. In that respect they are clearly objectively simpler. The entire field of parser theory exists entirely because of non-sexpr syntaxes. Parsing sexprs is an elementary exercise.
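For a sense of scale, here is a minimal sketch of such a reader in Common Lisp (names and structure are mine, for illustration; it handles only integers, symbols and nested lists -- no strings, no quote, no reader macros, and no error handling for unbalanced input):

    (defun tokenize (text)
      "Split TEXT into open-paren, close-paren and atom tokens."
      (let ((tokens '()) (chars '()))
        (flet ((flush ()
                 (when chars
                   (push (coerce (nreverse chars) 'string) tokens)
                   (setf chars '()))))
          (loop for ch across text
                do (case ch
                     ((#\( #\)) (flush) (push (string ch) tokens))
                     ((#\Space #\Tab #\Newline) (flush))
                     (t (push ch chars))))
          (flush))
        (nreverse tokens)))

    (defun parse-sexpr (tokens)
      "Parse one expression; return (values tree remaining-tokens)."
      (let ((tok (pop tokens)))
        (if (string= tok "(")
            (let ((items '()))
              (loop until (string= (first tokens) ")")
                    do (multiple-value-bind (item rest) (parse-sexpr tokens)
                         (push item items)
                         (setf tokens rest)))
              (values (nreverse items) (rest tokens))) ; drop the ")"
            ;; an atom: a number if it parses as one, otherwise a symbol
            ;; (a real reader would also upcase symbol names)
            (values (or (parse-integer tok :junk-allowed t) (intern tok))
                    tokens))))

    ;; (parse-sexpr (tokenize "(+ 1 (* 2 3))"))  =>  (+ 1 (* 2 3))
No lookahead, no precedence table, no grammar: one look at the current token decides everything.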


Infix works for binary operators at the cost of requiring operator precedence. Hence the parens you wrote.

Prefix works for variadic operators like the +, - and / in your example, without operator precedence rules to get wrong. And if one can't reliably remember how those rules differ between languages, so that people wrap everything in parens anyway, what exactly did you gain by rejecting prefix?


When I compare this √(3x3+4x4) expression mixing infix or prefix operators with specific precedences and orders of evaluation to this one (√ (+ (x 3 3) (x 4 4))), which comes with a unique and systematic syntax (operator values), which is nothing but a short notation for « take the square root of the sum of the product of 3 and 3 and the product of 4 and 4 », and which is easy to evaluate from the inside out in three steps (√ (+ 9 16)), then (√ 25) and finally 5, I understand why so many people don't like maths. You believe that 1+2x3+4 is easy to understand, but it's not: too many things are hidden under the hood, beginning with precedence. There is nothing under the hood with (+ 1 (x 2 3) 4), once you know that (operator values) means "replace values by what the operator returns" and that evaluation always proceeds from the inside out. "Special expressions" - like lambdas - are a separate matter.
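In executable Lisp, with x written as * and √ as SQRT, the same inside-out evaluation looks like this:

    (sqrt (+ (* 3 3) (* 4 4)))  ; step 1: (sqrt (+ 9 16))
                                ; step 2: (sqrt 25)
                                ; step 3: 5.0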


> When I compare this √(3x3+4x4) expression mixing infix or prefix operators with specific precedences and orders of evaluation

Ignoring the latter part and only responding to the first: root is infix; the left side is just an implied 2 here.


I definitely find the former much easier to visually parse than the latter. I liberally use unneeded parentheses to emphasize parts of an operation that might depend too much on operator precedence, but humans are not great at keeping track of many nested parentheses.

These kinds of expressions tend to end with a cluster of parentheses, and it becomes hard to identify the beginning and end of each one and to make sure they're closed at the right scope. Before someone comments "the editor will keep track of them for you", I want to answer that needing a tool to keep track of nested parentheses might be a sign that such notation needs improvement.


The more 'objectively' is used here, the less meaning it seems to have. If people have trouble using the notation, it's a subjective issue that cuts into the utility of the language. Even after getting used to it myself, I sometimes get tied up translating infix formulas to RPN or PN. There doesn't need to be one notation/language to handle everything.


The question is why people have more trouble parsing (+ 2 3) than (2 + 3).

I agree with you (contra GP) that path dependency is a fine reason to prefer infix. Continuing with what we're used to is very often the preferable choice where the costs of switching are high and the benefits unclear.

I actually think that infix notation has some advantages, especially for associative operations {e.g. if (G, +) is a group and a, b, c \in G, it's much more natural to write a + b + c than to choose between the equivalent (+ (+ a b) c) and (+ a (+ b c))}.

But undeniably S-expressions are superior from a teaching point of view as they don't hide the "true" structure.

Even though I don't dislike infix at all, it's hard to argue the reason most people prefer it isn't that they're just used to it.


> I actually think that infix notation has some advantages, especially for associative operations {e.g. if (G, +) is a group and a, b, c \in G, it's much more natural to write a + b + c than to choose between the equivalent (+ (+ a b) c) and (+ a (+ b c))}.

> But undeniably S-expressions are superior from a teaching point of view as they don't hide the "true" structure.

That's exactly what they do though. A group G is not the same as a particular choice of presentation, and an element of G is not a particular tree of generators. Linear transformations aren't matrices, numbers aren't decimal expansions, polygons aren't lines on a chalkboard - and internalizing this is extremely important for beginning math students.


> That's exactly what they do though.

> and internalizing this is extremely important for beginning math students.

I'm not saying that you're wrong, because, well, you aren't. But does the choice of notation help here? Understanding that a matrix isn't a linear transformation but only a representation of it is something that's inherently hard and requires a certain level of mathematical maturity. At that point in a sense notation doesn't really matter any longer.


Then don't choose, write (+ a b c).
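In Common Lisp, for example, + is n-ary, so the grouping never needs to be spelled out:

    (+ 1 2 3)              ; => 6
    (reduce #'+ '(1 2 3))  ; => 6, the explicit left fold it amounts to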


Yeah, but what's the difference then?


Agree with all except that I'd prefer square brackets because they don't require using the shift key.


That's why there were special keyboards -> see the unshifted parentheses:

https://webwit.nl/input/lynk/compare1.jpg


Having to press the shift key all the time while programming lisp was annoying. I've remapped them.


While I agree that RPN should be taught in schools, ease of parsing is not a reason to force it on everyone. Similarly, just because people get accustomed to lisp does not mean it's a good language. I say this as a lisp "enthusiast" using it for pretty much everything I can.

Whatever language we teach in schools should be simple and quite natural to get the hang of, and should be somewhat "industry standard" if they choose to continue with said language.

Python is a great candidate because of its reliance on indentation and lack of delimiters. While "we" consider Lisp's parentheses incredibly useful, new programmers find them scary. Forcing them on people does not make them any less scary.

For me lisp wouldn't be enjoyable without good tooling. Parinfer, paredit, emacs, slynk, etc. make lisp enjoyable to use and edit. Python has the benefits that you can quickly install it graphically, that pip is incredibly easy to use (sometimes to its detriment), and that you can edit Python sanely in any text editor (even Notepad).

Perhaps an introductory lesson on Hy in their compsci Python classes would be nice, though.


When a Lisp is your first or one of your first languages, it’s apparently much less of a problem.

Racket is probably one of the best languages to start with. Incredibly beginner friendly and complete, uniform tooling.


In my opinion schools should first teach a language with goto (Basic, C), so students understand how the machine works. Basic is perfect for children.

Once that is done, it makes more sense to move on to a functional language rather than an eclectic language like Python, which is mostly a collection of features that somewhat work in practice for system administration or scientific one-page scripts.

There is no deeper understanding of CS to be gained from learning Python. To the contrary, it will ruin future programmers.


> To the contrary, it will ruin future programmers.

Dijkstra once said the same of Basic!


And he was right. Look at how many BASIC programmers can't handle S-expressions.


> Python is a great candidate because of its reliance on indentation and lack of delimiters.

Really? Lack of delimiters? Python relies much more on delimiters than Lisp! Python has all kinds of delimiters for subscripting, comparison, slicing, destructuring, etc. Each one of these adds a lot of syntactic overhead while teaching Python to beginners. Whether indentation makes it easier or more difficult to learn Python is debatable. Indentation definitely makes code easy to read, but beginners are learning to write code.

> Python is a great candidate because of its reliance on indentation and lack of delimiters. While "we" consider Lisp's parentheses incredibly useful, new programmers find them scary. Forcing them on people does not make them any less scary.

Can you point to some actual data or references instead of making one unfounded claim after another? Do beginners who have not been exposed to any programming language yet really find parentheses scary? Where can I see the data?

I have first hand experience in teaching BASIC, Python and Lisp (actually, Scheme) to students as their first programming language. Believe it or not, in my experience they found BASIC the easiest to learn, then Lisp and Python was the hardest of the three.

> you can edit python sanely in any text editor (even notepad).

Spoken like an experienced programmer! For someone who is learning to program a computer for the first time, the indentation rules of Python happen to be the very first thing that trips up students. Did I indent the previous line with spaces? Or tabs? Or worse, with a mix of spaces and tabs? Notepad won't tell me, and Python will refuse to run the program if I guess wrong! Lisp has its own share of problems too. Like I said, beginners find BASIC the easiest to learn, but that does not make it the best language.

But my experience teaching students is just that -- my experience! I am not going to go ahead and claim that my experience is a general trend. That is why I think it is important to have proper data for things like this. It might do a lot of good for future generations of programmers and teachers. Sharing personal anecdotes is fine as long as it is disclosed that they are anecdotes. But taking your anecdotes, or even worse, your beliefs formed from your own biases as an experienced programmer, and claiming that they form some sort of general trend is disingenuous.


> This is evidence that S-expressions are objectively superior to other serializations.

Do you have links to the evidence? Not trying to be snarky. Genuine question. I wish to see more research and data in this space.


McCarthy himself:

https://en.wikipedia.org/wiki/M-expression

The project of defining M-expressions precisely and compiling them or at least translating them into S-expressions was neither finalized nor explicitly abandoned. It just receded into the indefinite future, and a new generation of programmers appeared who preferred internal notation to any FORTRAN-like or ALGOL-like notation that could be devised.


But why (+ 1 2 3) and not +(1 2 3) since the head is special? Or (1 2 3 +) so nested trees are built bottom up?


If it's +(1 2 3) then the language is no longer homoiconic and you will lose the primary leverage of being a Lisp. The fact that Lisp code consists entirely of Lisp data literals (code as data) is what enables Lisp macros and the superior REPL experience.


Can you elaborate what about +(1 2 3) makes it not homoiconic? Isn't it trivial for a hypothetical Lisp implementation to treat +(1 2 3) as the list [+, 1, 2, 3]?

Not sure if I am missing something. This is an honest question.


First of all, that place is where the reader macros go, so the spot is already taken.

Second of all, notwithstanding interference with reader macros: no, it isn't completely trivial unless what you're suggesting is just pointless string substitution, which would also make e.g. 1(2 3) equal to (1 2 3).

If what you're suggesting is having an enforced hard split between lists and function/macro invocation, that pretty much just makes everything harder to do with no perceivable gain, other than looking a bit more like C-based languages. It's a bit like enforcing that any strings that are parsed as numbers somewhere in the source code must be defined using "..." while strings that aren't must use '...' instead.

The entire point is that you can manipulate code as data structures. You can rewrite the rules of the language by returning data structures. There's no reason to make this more complex.
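A minimal sketch of that point (the macro name here is made up for illustration):

    ;; A macro receives its arguments as plain, unevaluated lists
    ;; and can rearrange them before evaluation.
    (defmacro swap-args (form)
      (destructuring-bind (op a b) form
        (list op b a)))

    (swap-args (- 10 3))   ; macroexpands to (- 3 10), evaluates to -7
The macro body is ordinary list manipulation; there is no separate AST type to learn.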


Homoiconicity in Lisp means that it is written on top of the data syntax.

Sure, you can put other parsers in front, which might just convert a different notation into Lisp data. The difference then is that the primitive data structure itself has a different syntax. In Lisp we use the syntax of the primitive data notation (-> s-expressions) to encode both data AND code.


Try replacing the + with a more complicated expression. ((if t + -) 1 2 3) and (if t + -) (1 2 3). In your notation it is much harder to determine which expressions are intended to be evaluated.


(+ 1 2 3) vs +(1 2 3) is just a small syntactic difference and has no bearing on whether the language is homo-iconic or not. The translation from code to data is equally trivial for both syntaxes.


Sure it is.

  (paris berlin london)
is a data structure which knows nothing about Lisp. It's just a list of symbols.

  paris(berlin london)
does not make any sense as a list of symbols. Why is paris written outside of the list?

Lisp chooses to write programs as nested lists of symbols.

If you start to define a specific syntax which violates the simple syntax for nested lists, then the code is no longer read by reading a simple data expression.


I think you're wrong: if (+ 2 3) means apply the + function with the 2 3 arguments, then (Paris Berlin London) means apply the Paris function with the Berlin London arguments.

AFAIK if you want a list either you use (list a b c) or you use '(a b c), which I find less regular than +(1 2) or (1 2), which would equal list(1 2).


What it means is entirely contextual.

Often, it will be interpreted to be a function call or a macro invocation in the eval step (the E in REPL).

You can sidestep this behaviour using the ' reader macro (i.e. quote) as you mentioned yourself. Quoting disables this behaviour for the form that the reader macro applies to. Reader macros occur in the R step (i.e. before the E in REPL).

It is a list data structure. There are many examples of lists which are never interpreted as function calls in Lisp, since macros run before evaluation and can change the default behaviour, e.g. in

    (defun increment (x) (+ x 1))
the (x) is not a function invocation, it's a list of named function parameters.


Remember: the specific feature of a typical Lisp is that it is written as s-expressions. S-expressions are a notation for data (similar to XML, JSON, ...). This notation is independently useful for all kinds of purposes - like a list of city names: (paris berlin london).

But the specific feature of Lisp is this: code is data and data can be code. Some use the word homoiconic -> the program is written in its own data syntax. S-expressions are the data syntax used by Lisp and the programs are written as s-expressions.

> (+ 2 3) means apply the + function with the 2 3 arguments

Depending on the context (a b c) can have different meanings. As a list it is a list of three items. As a top-level item it is a function call. In (let (a b c) ...) it means a list of variables.

Lisp is based on s-expressions. S-expressions is a simple syntax for data:

a symbolic expression is:

  a symbol: foo, bar, 1, 2, paris, +, -, delta, bear, G1345

  an empty list: NIL or ()

  a non-empty list: (s-expression-1 ... s-expression-n)
That's basically it.

Lisp has three functions to work with those: READ, PRINT and EVAL. READ reads a textual s-expression to data, PRINT takes data and prints it as an s-expression, EVAL takes data and evaluates it.

Thus the toplevel of Lisp is something like

  (loop (print (eval (read))))
It reads a Lisp expression in form of an s-expression, evaluates it and prints the result.

> AFAIK if you want a list either you use (list a b c) or you use '(a b c), which I find less regular than +(1 2) or (1 2), which would equal list(1 2)

(a b c) is a list. It's an s-expression. It is valid data. It's only valid Lisp if A is an operator.

(quote (a b c)) is also a nested list AND a valid Lisp program.

But READ reads all kinds of nested lists, not just Lisp programs. It does that because S-expressions are a general data syntax.
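For example, using READ-FROM-STRING, the string-based variant of READ:

    (read-from-string "(paris berlin london)")  ; => (PARIS BERLIN LONDON), plain data
    (eval (read-from-string "(+ 2 3)"))         ; => 5, the same kind of data, evaluated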

Compare that with most other programming languages. Their programs are not written on top of a general data-structure notation like nested lists, nested vectors, nested tables, ...


There are languages where this can be written +/ 1 2 3

e.g. APL


So that the function to be applied could itself come from executing code.

  ((if (condition) '+ '*) 2 3)
If for some (toy, in this case) reason you wanted to sometimes add and sometimes multiply.


Lisp syntax is not required for this. There is no reason that you could not write:

    (if (condition) '+ '*) (2 3)
and have that mean a conditional function call on the arguments 2 and 3. In fact, many traditional-syntax languages allow you to do that, e.g.:

    Python 3.8.9 (default, Mar 30 2022, 13:51:16) 
    >>> def f(x): return x+1
    ... 
    >>> def g(x): return x+2
    ... 
    >>> (f if 1 else g)(4)
    5
No, the real reason you want (f x y) rather than f(x y) is so you can parse it with no lookahead.

It is actually possible to extend Lisp's syntax to allow f(x y) using the hack that if there is no whitespace between a symbol and an open-paren then treat that like a function call, i.e. you can tweak the Lisp parser so that:

    (list f(x y) f (x y))
will read as

    (list (f x y) f (x y))
and so kinda sorta do the Right Thing if you like traditional function call syntax. But that is pretty hacky and fragile.


Even with lookahead, it seems like there are ambiguous cases. I think the "hacky and fragile" level is so high that it rises to "practically required" to avoid ambiguity. (In the sense of "if John McCarthy had proposed it the other way in an early draft, it would have been peer-reviewed out before the first implementation.")

In a sequence of expressions, how would I [or the computer] know how to evaluate the following

  g (a b) h (c d)
If g (a b) returns a function f and h (c d) returns a value v, should the result of that evaluation be v or f(v)?

Does our intuition change if I write it as:

  g (a b) 
  h (c d)
If the default is for the result to be v, and I do want it to be f(v), it's not obvious to me how to re-write it, whereas it's obvious if the first element in a list is a function call:

  (g (a b))  ; ==> f
  (h (c d))  ; ==> v
vs

  ((g (a b)) (h (c d)))   ; ==> (f (v))


Because parsing +(1 2 3) requires lookahead. You want to be able to type X into a REPL and get back the value of X, but you can't do that if there is a possibility that an open paren after the X will change its semantics into a function call.

Also, what should be the result of:

    (defvar f ...)
    (defun f (x) ...)
    (length (list f (g x) f(g x)))

?


How would you express a try/catch block in Lisp then? In every programming language, composability breaks at exception handling.

No, instead of teaching 2 + 2 = sum(2, 2), I will teach the GOTO statement.


A try-catch block?

  (handler-case (/ 1 0)
     (division-by-zero (condition)
       (format nil "We caught ~S." condition))
     ((or file-error parse-error type-error) (condition)
      (format nil "We should never catch ~S" condition))
     (:no-error (result)
       (declare (ignore result))
       "We should never get here."))
That's an example of how it can be done in CL; in Scheme you've got guard.

  (guard (con
          ((warning? con)
           "We should never get this.")
          ((assertion-violation? con)
           "We should get this."))
    (/ 1 0))


Common Lisp has a bunch of that stuff. For example HANDLER-CASE

http://www.lispworks.com/documentation/HyperSpec/Body/m_hand...


I never liked the ] in Interlisp (so never used it myself) nor alternative syntax systems like LOOP (also avoided it).

I don’t understand the obsession some people have with the parentheses. They basically fade into the background and when you do look at them it’s because they are informative. They are no more cognitively burdensome than the space bar.


Exactly. Totally baffled by the supposed popular reaction to parens. (However, I use IDEs that match and color parens, so maybe they didn't have such tools in the olden days referenced here.)

Not having S-expressions is weird. Not being able to drop my cursor anywhere, expand my selection with a few keystrokes to “grab” a referentially transparent value expression, and just relocate it, move it, or evaluate it in the REPL. What are all these other PLs and programmers even doing?


A lot of languages pointlessly make a distinction between a statement and an expression. I can’t imagine why anyone would want that.


It's one of the things I love the most in Rust, and was previously just seen in functional languages.

Even scripting languages like Python feel clunkier than Rust due to the unnecessary statement/expression distinction.


I like Racket's way of handling them, to a degree. Each of {}, [], and () are interchangeable, and the use of [] is mostly down to convention. For example, it is often used in let bindings:

    (define x
      (let ([a 1]
            [b 2])
          (+ a b)))
By "to a degree", I mean I would actually prefer it if the convention were enforced.


The convention is enforced in Clojure. [] for bindings, {} for maps.


It is not a convention in Clojure. (), [], {}, #{} represent different data structures.


It's really just syntactic sugar, so I would agree with the statement that the convention is being enforced.


You get different data structures. E.g. (hash-set 1 1 2 2) prints as #{1 2}, whereas [1 1 2 2] prints as [1 1 2 2]. (The literal #{1 1 2 2} is actually a reader error, since set literals must have distinct elements.)


Not exactly, it uses square brackets for vectors sometimes and curly brackets for maps sometimes, yet other times it uses square brackets for maps:

  (let [a 1 b 2] (+ a b))
Instead of:

  (let {a 1 b 2} (+ a b))
(I think my map notation may be wrong, I always just bounce off Clojure so I'm bound to make mistakes).


I quite like how Fennel handles it as well: {} for standard Lua tables and [] for arrays, but [] also serves to define variables in a let binding. Of course this comes from Lua's quirks, but it's a nice, sane way to handle it.


This is an old feature of some Scheme implementations. There are also books where the Scheme code is written this way, IIRC.


I think we just have to accept that some people like s-exprs and some people don't. What people have against parentheses is beyond me, but I also can't explain why I find languages without them so ugly. Either way, the discussion is pointless if you assume LISPers tolerate the parentheses but would prefer something "better", or if you assume everyone is only opposed to the parentheses because of FUD. People literally just have different tastes!


For me it's the one-way-ness of it; I can't find a nice word to describe it.

Most programs in other languages will go outward and then inward; lisps just go outward. I quite like the "look" of Python as well, except that it's missing the colorful parentheses and the expressiveness that makes lisp so powerful.


I dunno, my Lisp code comes back in pretty frequently, but I'm defining new functions, using CLOS, making use of let* or other nifty macros (uiop:nest is occasionally useful), and refactoring when a function gets too large. But compared to horrors I've frequently seen in JavaLand (especially in older code when the more modern practice of extract method/function refactoring was less common, more laborious, and not really automated), tons of Lisp code I come across from others has been pretty sane as far as indentation and spacing goes.

It seems like a problem mostly handled by style: just because you can easily represent the whole expanded AST in one function definition, and only use simple cons/list/array/map data structures, doesn't necessarily mean you should.

And sure, there are plenty of (in my opinion) not-so-great examples out there; it's rather unfortunate that this is the first example for the cl-sdl2 project: https://github.com/lispgames/cl-sdl2/blob/main/examples/basi... (Though on another metric, the longest line is only 96 characters; most of my lines happen to be under 80, but I don't hold myself to that limit -- my screen and editor windows support wide lines by default, like >200 characters wide.) The huge nest of with-macros further impedes runtime redefinition and modification: sure, you can change code and redefine the function, but your game loop isn't going to see any of that on further iterations. At least the other renderer.lisp example is moving in a better direction, splitting the game loop up into several draw functions, and you see those have a normal pattern of going out and then back in.

Urbit docs for its language Hoon once called the phenomenon something like "attacking the right margin". I recall this FOSDEM paper describing it in more detail https://archive.fosdem.org/2018/schedule/event/urbit/attachm... especially in 6.3.2:

> There are two common syntactic problems in functional languages: closing terminator piles (eg, right parens in Lisp) and indentation creep. A complex function will have a deep AST; if every child node in that AST is indented past its parent, any interesting code tends to creep toward the right margin.

> To solve terminator piles, there are two forms of every Hoon twig: “tall” and “flat”, ie, multiline and single-line. Tall twigs can contain flat twigs, but not vice versa, mimicking the look of “statements” and “expressions” in an imperative language. Flat form is enclosed by parentheses and separated by a single space; tall form is separated by multiple spaces or a newline, and (in most cases) not enclosed at all. ...

> Right-margin creep is prevented by backstep indentation; where a classical language might write

    ?:  test
      then
      else
> Hoon writes

    ?:  test
      then
    else
> Ideally the most complex twig is the else; if not, there’s an alternate form, ?., which reverses q and r.

Hoon code is rather interesting to look at (random file: https://github.com/urbit/urbit/blob/master/pkg/arvo/lib/aqua...) especially with its rune vs keyword syntax choice, but I don't think it's worth the effort.


Interestingly, the author mentioned both M-expressions ("never implemented") and Mathematica (with a number of interesting links). Mathematica expressions are quite close to https://en.wikipedia.org/wiki/M-expression . Function application is not (f x y) or f[x; y] but f[x, y]. In contrast, lists in Mathematica are {1,2,3} and thus pretty close to [1;2;3]. This curly-bracket list notation is already syntactic sugar, as lists are actually List[1,2,3] and thus very transparent to how you can represent them in LISP. Mathematica introduces a number of handy abbreviations, such as f@x for f[x] or f/@{x,y,z} == Map[f,{x,y,z}] == {f[x],f[y],f[z]}. It even "overloads" the double semicolon ;; to act as an in/pre/post-fix "operator", cf. https://reference.wolfram.com/language/ref/Span.html -- the same is true of virtually any symbol, such as "==" (Equal) or "+" (Plus).

The Mathematica/Wolfram language could really be a game changer if it were open source.


> M-expressions ("never implemented")

There were a bunch of approaches and even implemented languages.

https://github.com/shaunlebron/history-of-lisp-parens/blob/m...


It already is:

https://mathics.org/


Even F[1,2,3] is sugar for an sexp. If you treat it like (F 1 2 3) it will work. You can take the head, or "quote" it with Hold. Or you can forget that and imagine that the language indexes from 1.


Editor support is missing adjust-parens[1], which I've been using for about a decade.

1: https://elpa.gnu.org/packages/adjust-parens.html


Do you use this as an alternative to paredit?


Yes. Lisp code is only a small fraction of the code I edit, and trying to get my fingers to switch between paredit and not-paredit mode is not something I've been successful at. Adjust-parens lets me use line-oriented editing commands to structurally manipulate the lisp code, so it's much easier to context switch between lisp and not-lisp code.


Scribble (Racket's documentation language) uses a very interesting @-syntax that is very similar to M-expressions: @func[x y]{some text} -> (func x y "some text").


Which is basically the syntax of Scribe, an early markup language from CMU's Brian Reid.

https://en.wikipedia.org/wiki/Scribe_(markup_language)

There is a user's manual from 1978: http://bitsavers.informatik.uni-stuttgart.de/pdf/cmu/scribe/...


"Elegant weapons for a more civilized age"

https://xkcd.com/297/


> Although LISP is one of the most powerful tools available ... the format of the input language (S-expression) is awkward to use -- so awkward that many errors in programming LISP stem directly from the fact that S-expressions are the only allowable form of input. The inherent difficulty of producing correctly working S-expressions is tacitly recognized by anyone who uses M-expressions in place of them, when not communicating directly with the computer.

Extremely accurate and written in 1964.


Editors have come a long way in those 60 years.

Having written some Racket and Scheme, I notice no increase in the number of syntax errors compared to other languages. Other languages have all sorts of syntactical gotchas because of all the syntax features they have.



