Not Lisp again (2009) (funcall.blogspot.com)
403 points by kamaal on Oct 26, 2018 | 233 comments



If this presentation of Lisp and model of computation clicks with you, you owe it to yourself to read Structure and Interpretation of Computer Programs (SICP).

The Abelson from the OP's lecture is the co-author, and the presentation described exactly follows the structure of the book (including the derivative example).

It opened my eyes to a new way of thinking about coding when I first read it (and worked through the exercises) many years ago.


I've always wondered whether recommending SICP would also be appropriate for a person who has never programmed before.

Perhaps pragmatism aside, would you?


SICP, although an excellent book, is not a good introduction to programming.

[1] thoroughly explains why

[2,3] based on How To Design Programs [4] is a much better intro; you should read SICP after completing this

[1] https://www2.ccs.neu.edu/racket/pubs/jfp2004-fffk.pdf

[2] https://www.edx.org/course/how-code-simple-data-ubcx-htc1x

[3] https://www.edx.org/course/how-code-complex-data-ubcx-htc2x

[4] https://htdp.org/


SICP is literally a textbook for a programming 101 class. Its whole purpose is to teach programming to people who might have never programmed before.


It's literally a textbook written for use at MIT.

I don't think it's fair to continue the fiction that introduction to programming 101 at MIT is pitched at people who have never done programming before, any more than it is to pretend that people who get into top medical or law schools haven't had multiple years of coaching to get into top medical or law schools.

Humbly, it's not a beginner programming book in any reality-connected universe...


It was the introduction to programming at MIT from 1980. Almost nobody born in 1962 had any programming experience whatsoever when they got to college. Even when SICP was being phased out at MIT a decade ago (heck, even today), there were many smart students without any programming experience when they got to college. Maybe even the majority, even at highly selective technical schools.

It was not and is not necessary to have extensive programming experience to get into a good engineering college in the USA. (Arguably middle schools and high schools should place more emphasis on the subject, but that’s not where we are today.) Many of the best professional programmers I know were first exposed to it in college. Some were first exposed to programming after finishing college, and taught themselves.


I think it was different in the 70s/80s.. no coaching before college. Maybe self learning, but no tutors.


One potential downside for novice programmers is that some of the examples are a bit deeper than what you find in most beginning programming books. For example (as exhibited in the blog post topping this thread), writing programs pertaining to differential calculus. Obviously, lots of non-programmers have studied calculus, but lots haven't, as well. To the extent a reader might stumble on some example programs, SICP could be challenging, even if the programming content itself starts at zero and builds up.


It’s supposed to be a rigorous and challenging introductory course for first-year MIT students 40 years ago who had no previous computer programming background.

For someone who is not as well prepared as a typical first-year MIT student, something else might be better, but it’s reasonably accessible in my opinion. I would have loved to have a course like SICP as a high school student.


The target audience for SICP seems to be basically a freshman Gerry Sussman. There's no doubt it's reasonably accessible to people with a suitable combination of background and aptitude, but the historical evidence seems to be that it's not accessible to the typical CS undergraduate: according to https://www2.ccs.neu.edu/racket/pubs/jfp2004-fffk.pdf it had a wave of adoption as a CS 101 text but enthusiasm soon soured. In my experience, even The Little Schemer, which covers much of the same material, from a similar perspective on computing, but in a much more streamlined and learning-friendly form, isn't close to being universally accessible for unsupervised learning.


My quick take is that the authors of the linked essay have a circumscribed and very industry-focused view of what an introductory CS course should teach and what kind of things a student should be expected to learn. Their primary goal seems to be preparing students for a follow-up Java course (of the early 2000s style of Java), but without dropping all of the incidental complexity of Java/C++ on the students right away. To that effect they have pared down most of the “computer science” content of a SICP type course, and focused heavily on the computer programming process, analogous to the way middle school English classes teach about writing.

(Disclaimer: I have not directly evaluated their curriculum/textbook.)

This differs markedly from my own opinion about the proper goals of a ‘computer science’ course at the undergraduate level, which is to teach timeless principles and flexible thinking without bending to fashion (especially fashion which is now a decade out of date), and to prepare students for follow-up computer science courses such as data structures / algorithms, theory of computation, programming languages, or in a more applied direction databases, networking, graphics, operating systems, numerical analysis, machine learning, and so on. A lot of these follow-up courses will be substantially mathematical and will rely heavily on analytical skills developed in an introductory CS course and in mathematics courses (not just on programming skills per se).

But if updated for 2018 the authors’ curriculum would be appropriate as a course titled “introduction to programming” or the like. I agree it sounds like an improvement vs. first courses that start students out on C++ or Java (their main comparison in the linked paper).

YMMV.


The authors are Matthias Felleisen and other core PLT Scheme/Racket guys. They're hardcore members of the academic Scheme community, absolutely not very industry-focussed guys, and the real industry-friendly contingent apparently doesn't much like HtDP either: http://www.ccs.neu.edu/home/matthias/Thoughts/colleagues.htm... . HtDP is often described as a bit of a grey slog (I've dipped into it but haven't done it or read it cover to cover yet), but the authors absolutely haven't given up on the ideal of teaching timeless principles and modes of thinking, they're just more realistic about how the material has to be paced and prepared to get a normal body of first-year undergraduates through it successfully.


”For example, writing programs pertaining to differential calculus.

That’s only a description of Newton’s method, plus an example of symbolic differentiation (2 and 6 pages, respectively, in a book of over 600 pages, in my copy)

Both are about a tiny corner of differential calculus: differentiating ”expressions that are built up using only the operations of addition and multiplication with two arguments”. It doesn’t involve goniometric functions or integrals.

So, it is about turning ax² + bx + c into 2ax + b. I don’t know when exactly I learnt that, but it was way before university, when I was somewhere between 13 and 16 years old. It was part of the normal curriculum.

”SICP could be challenging”

I would hope so. If it couldn’t be challenging, how are students supposed to learn from it?


That may be, but it was still incredibly dense and challenging for me when I found a discarded copy in the 3rd year of my 4-year computer science degree, when I was already quite familiar with Java/C/Python-esque imperative code and somewhat familiar with Lisp.


SICP is taught to freshmen at UC Berkeley and MIT in their "CS 101" classes. I mean, most of these classes are "translated" to Python, but it is still SICP, and it has Scheme parts for things you can't do in Python. As a Lisp enthusiast, and having learned programming with SICP, I believe it's possible.


I’ll go further: at Indiana University, when I attended, CS 101 was taught SICP-style. The CS 101 equivalent for non-majors was not, and was taught in C++.

I entered as a non-major so did it in C++ first. It only clicked when I went back to SICP. It could just be having done the material twice, but I’ve always understood most CS concepts more easily in a functional sense.


But incoming MIT freshmen are likely to have some programming experience, and they didn't cover the whole text.

Self-study SICP without any previous programming experience would be exceedingly tough.


You know how there's a thing called "precalculus" in high schools? Covers trig, logarithms, etc.?

I feel like SICP would be a perfect text for introductory CS, if there were a "pre-CS" course in high schools that was essentially equivalent to the current 100-level Discrete Mathematics courses.

Come into programming already knowing set theory, graph theory, computational complexity, etc. and SICP will make perfect sense.


There's been proven success teaching a variant of HtDP (How to Design Programs) to high school students in algebra classes: https://www.bootstrapworld.org/index.shtml

It is described here in an excellent talk about the challenges of curriculum design: https://youtu.be/5c0BvOlR5gs. Note that at the end of the talk he discusses how calculus is secretly taught to them as well, by keeping track of a video game's state over time. Going from Bootstrap to SICP would be possible too.


That's a completely separate idea from what I'm suggesting, though.

You don't need to already understand programming to be able to use SICP. SICP doesn't need an intro (or to be modified), it is an intro. What you need to already understand, is math. Specifically, the kinds of math that SICP uses in its examples and problems—which aren't taught in high-school, but totally could be (100-level Discrete Maths has very few prerequisites; you could theoretically learn it in elementary school, though practically not, for mental-development reasons.)

SICP is, essentially, a book to teach programming to "mathematicians who have not necessarily ever heard of a computer." It works wonderfully when used as such. But the less mathematical grounding you have, the harder you'll struggle.

It's exactly like a textbook for one of those "Communications for Business" courses. If you're not a business student, then it'll be a bad communications textbook, because all the examples and problems are business-related.


In that talk, though, he discusses the impossible task of introducing new elective or mandatory courses to public middle or high schools: how the school board and teachers will simply drop the rigor if it's a mandatory course, making it yet another useless class, or how students will never end up taking it as an elective as their elective timeslots fill with remedial assistance courses.

You are right, though; I wish there was a course that taught set theory and complexity in HS. For anybody interested in learning both at the same time, there are intro discrete math books that also teach intro programming (https://cs.wheaton.edu/~tvandrun/dmfp/), but not with the full CS 101 coverage of SICP.


> (100-level Discrete Maths has very few prerequisites; you could theoretically learn it in elementary school, though practically not, for mental-development reasons.)

Typical introductory discrete math textbooks list high school precalculus as a prerequisite and recommend an introductory calculus course as very helpful additional background. You would have to teach something significantly different to someone with the background of a typical middle school / early high school student, because they are lacking quite a bit of the expected background concepts, terminology, and notation.

These subjects could certainly be taught to elementary school students, but slowly and with a lot of build-up and practice, over the course of years. The limiting factor is not primarily “mental development” (as in, some inherent change of brain wiring that inevitably happens with age) but rather substantial amounts of practice, prior exposure to related ideas, time spent digesting them, stamina/attention span/confidence at solving technical problems, etc.


> I feel like SICP would be a perfect text for introductory CS, if there were a "pre-CS" course in high schools

That is more or less what this was meant to be:

https://people.eecs.berkeley.edu/~bh/ss-toc2.html


I was introduced to lisp through "A Gentle Introduction to Symbolic Computation".

Honestly, I think it is a superior intro-book for genuine beginners, compared to SICP, and a quick perusal of the two will allow most people to come to a conclusion not only about the level at which they are pitched, but which one is pedagogically superior for them in terms of both structure and writing.

https://www.cs.cmu.edu/~dst/LispBook/book.pdf


> Because Common Lisp is such a complex language, there are a few places where I have chosen to simplify things to better meet the needs of beginners. For example the 1+ and 1- functions are banished from this book because their names are very confusing.

Are they? I don't know if I remember hearing anyone complain about those before.


I haven't seen it articulated either, but I am not a fan of these either. They look like cheesy, whitespace-dependent infix: (1+ 1) yields 2 and kind of looks like (1 + 1). What?

In TXR Lisp, which is quite CL-like in some places and deliberately so, I used the names succ, ssucc, and sssucc for adding 1, 2, and 3. The inverses are pred, ppred, and pppred.

These are vaguely inspired by Hofstadter's S0, SS0, SSS0 ... in Gödel, Escher, Bach, as well as by caar, cddr, ..., plus the dim memory of operators called succ and pred in Pascal.


It is a misconception that incoming MIT freshmen know programming. Some of them might (and a few of them will be world-class programmers), but these classes are designed for people who don't know programming at all, and, believe it or not, the great majority will be people who have never coded anything. I TA'ed SICP at both MIT and Berkeley (they have a pretty similar class derived from the original SICP book; some parts of it are in Python, some in Scheme, some in SQL), and most students believe these classes are for geniuses or for people who already know programming. Wrong: the whole point of SICP is to teach it, and once students try hard they see for themselves that they can learn programming.


That's true, but all of them have/had a lot more "mathematical maturity" than most programming 101 students today (who are often middle/high school students, university students who didn't do particularly well in high school mathematics, or even independent learners coming to programming from non-STEM backgrounds).

SICP is not an introduction to programming as much as it is an introduction to computer science.

I mean, one of the key lectures is about differential calculus. Do you realize that at most universities, CS 101 students are not assumed to have taken calculus? Hell, a large fraction of CS 101 students will NEVER take calculus (non-majors who will instead opt for college algebra).

But that example itself is actually more of a red herring. The point is that the mathematical maturity does provide a lot of extremely useful background thinking/problem-solving skills that most CS 101 students haven't yet developed and will be developing for the first time in CS 101.


My understanding is that SICP was developed as the programming 101 class for MIT. Do students come into programming 101 at MIT never having programmed before? I would suspect that what is a great book for programming 101 for MIT students is not necessarily the right book for those that have not yet been exposed to programming.


I've heard that actually students who had never programmed before did better with SICP than students that had. You have to un-learn a lot of habits from imperative languages when you start working with Scheme. Total novices could take the material as it was presented, but people who had programmed before often tried to shoehorn it into the C/Java/PHP paradigm they'd already experienced and ended up fighting the language.

The same effect worked in reverse as they graduated MIT or worked on projects in more mainstream languages, though. Students would have to unlearn some of the elegant recursive formulations and strong abstraction abilities of Scheme and deal with languages that actually have strong industry adoption. That was a major factor [1] in the replacement of SICP with Python in MIT's intro programming class. Python has abundant library support that more closely mimics what the programmer will have available and what challenges she'll face in the real world.

[1] https://cemerick.com/2009/03/24/why-mit-now-uses-python-inst...


I don't disagree that SICP is a better fundamental model than other approaches. I am one of those that was slowed down by learning BASIC as my first language. I was questioning whether or not the typical student learning Programming 101 at MIT is a good template for other students coming into a class with similar goals. I would be surprised if the typical MIT freshman hasn't done some kind of programming in high school or earlier.


At the time the book was written, the freshmen who had used a computer were in the minority, even for EECS majors.


My only criticism of SICP is the fondness for arithmetic/analysis in the problem sets. For EE it's obviously normal. For pure programming it's an unnecessary requirement/burden (unless you love maths). HtDP is better in that regard.


I basically agree with this opinion on SICP:

Many of the illustrations are based on mathematics and a lot of problems require a decent amount of mathematical maturity to tackle. I think it’s a great book for anyone decently mathematically competent but for people who don’t really know what to write when asked for a proof, I think it’s a lot more difficult.

Apart from the mathematics based examples, a lot of the way things are explained is not very related to actual machines and relies a lot on abstract symbol manipulation, something that is easy not to notice if one is used to mathematics, but is often very difficult if one is not.


> For pure programming it's an unnecessary requirement/burden (unless you love maths). HtDP is better in that regard.

I disagree strongly on that assertion. The mathematics in SICP was what made it stand out for me, and what got me into programming. Other books I read before that just presented programming as an opaque thing where you follow instructions and write incantations without any rhyme or reason.

In general terms, the math-phobia that has been prevalent in computer programming for the past 20 years is a new phenomenon (I have not seen it in literature from the 1990s or earlier, or heard it from old-school programmers and CS people) and is at its core deeply anti-intellectual. An example of this dumbing down is the current crop of JavaScript "devs." This runs all the way to the semantics of the language: one of the most idiotic decisions made in JavaScript is overloading '+' to concatenate strings. Commutativity? Associativity? Algebra? Get out of here with this math nonsense.


You read me wrong, I said arithmetic/analysis. To encode and solve a problem with a computer you need no knowledge of convergent square-root approximation (an exercise early on).

A mathematical approach is a plus to me too.


It was specifically written as the first textbook on programming for freshmen who had never seen a computer before (this was in the early 80s, when the MIT freshmen who had used a computer before were the exception).

So dive on in!


Written as a text book for freshmen who were going into engineering and programming - at one of the top schools in the country.

I've only been close to that level, but I believe freshman courses of those sorts often involve throwing really smart, cocky kids at a given subject in the most challenging fashion possible. (Remember, Caltech used the Apostol textbook for calculus, developing everything axiomatically, because, you're smart.)

So it is an introductory text, but it might not be a gentle, easy introductory text.


Depends on how smart they are. When my physicist friend, who works on cold fusion and, judging by my interactions with him, has an IQ around 140 (compared to my 110-115), wanted to get into programming, I recommended SICP to him, but I've never recommended it to anyone else.


Working on cold fusion doesn't sound like a particularly intelligent use of one's resources.


It's not science if you only work on things you know in advance will succeed. Just like there's no such thing as a "failed" experiment.


I believe it's been re-branded LENR now - and there needs to be someone to investigate the claims.


There are also SICP video lectures available, from the 80s.


There are also some newer SICP videos online, but the 1980s one is the best and still highly relevant to a programmer from today's era (or any era).

I'm self-taught and it's what made me fully "grasp" programming and the power of abstraction.


Do you have any link to the newer SICP videos?


That book is a miracle of clear presentation. Worth looking at even if you never write a line of Lisp/Scheme in your life.


Lisp is a fun language to program in. I learned Lisp after already being familiar with Java / Python and some other languages, which maybe made it even more beautiful.

One of my favourite pieces of code is a Lisp REPL in Lisp:

    (defun repl ()
      (loop (print (eval (read)))))


looks even better with threading in imaginary lisp dialect:

    (-> read eval print loop)


    (defmacro -> (&rest args)
      ;; nest each name inside the next, so that
      ;; (-> read eval print loop) expands to (loop (print (eval (read))))
      (loop for item in args
            for result = (list item) then (list item result)
            finally (return result)))
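
With that definition, the expansion can be checked at the REPL:

    (macroexpand-1 '(-> read eval print loop))
    ;; => (LOOP (PRINT (EVAL (READ))))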


No longer imaginary. Power of LISP right there.


Agree to disagree; to write lisp one must know how to read it.

The threading macro is great, but the GP one-liner tells me more about lisp.


Lisp is all about being able to compile the latter to the former at compile time, if you want it. That's the major problem Lisp solves.


Hard agree!

Still one is the latter and the other, the former. Hence I think the first tells more of the lisp story, with just a few more parens. ;-)

"Why is it called REPL when the code says LPER? Well..."


Why is it called "root mean square" when you square first, then take the mean, and then the root?


We can just cut straight to “why is function composition backwards?”


It's not backwards. The text is linear, but the code isn't. It goes from the outermost nested function to the innermost nested function.

Makes most sense if you think of HTML

<html> <body> <div> </div> </body> </html>

would compose as

compose( html, body, div )


`(root (mean (square x)))`

Is this more readable?

`x ~> square . mean . root`

Where `~>` "sends" a value to a function `x ~> f := f(x)` and `.` is the "backwards" function composition `f . g := \x (g (f x))`

I think I prefer the "backwards" notation ("root mean square"), but I can see the appeal of "square mean root"
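
For what it's worth, here is a sketch of both readings in Common Lisp; square, mean, and rms are hypothetical helpers over a list of numbers, not anything from the thread:

    (defun square (x) (* x x))
    (defun mean (xs) (/ (reduce #'+ xs) (length xs)))
    ;; The name reads root-mean-square, but square is applied first:
    (defun rms (xs) (sqrt (mean (mapcar #'square xs))))

    (rms '(1.0 2.0 3.0 4.0))  ; => ~2.7386128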


I like having both notations at my disposal. For example, F#'s forward pipe operator[0], or pipes from a unix shell.

[0]https://docs.microsoft.com/en-us/dotnet/fsharp/language-refe...


I think clojure has that; I could be wrong, though, I've never used clojure.


Clojure does indeed have -> (thread-first), ->> (thread-last), some->, some->>, cond->, and as->. [0]

(->) inserts each form's value between the function name & the first argument in the next form.

(->>) inserts each form's value as the last argument in the next form.

The latter forms are much more niche and I haven't found need for them yet.

[0]: https://clojure.org/guides/threading_macros


Bucklescript/ReasonML have that too:

CLJ:(->)/BS(|.)/RML(->) called Fast Pipe.

CLJ:(->>)/BS(|>)/RML(|>) called Pipe.

plus some nifty placeholders for other positions:

https://reasonml.github.io/docs/en/fast-pipe#pipe-placeholde...


as-> is great for when you have to mix thread-first and thread-last.

cond-> is great for building maps where some keys might not be needed:

    (cond-> {:k1 v1}
      v2 (assoc :k2 v2)
      ...)
some-> and some->> I don't use as often, but when mixing in some java code that could throw an NPE, these will avoid that and just return nil early (they short-circuit when getting a nil value).


You can nest ->> inside ->; it took far too many years for me to realize:

    @(-> ctx
        :hypercrud.browser/fiddle
        (reactive/cursor [:fiddle/ident])
        (->> (reactive/fmap name)))
another

    (-> [10 11] 
        (conj 12) 
        (as-> xs (map - xs [3 2 1])) 
        (reverse))


It does, sort of. As another commenter pointed out, -> chains the operands at the beginning, but also the ‘loop’ form requires a list of bindings.

It is totally possible to write custom -> macro that works correctly as used in my example.


I've always thought (probably because I'm not that smart) that REPL really should have been coined as REPR instead.

Though I guess that wouldn't be valid lisp if you tried to express it literally.


R as in repeat? REPL makes sense if you read it as "read-eval-print loop" (loop being the noun, not another verb).


R as in REPR


Python 3 for comparison:

    while True:
        print(eval(input(">>> ")))


Not really:

   ovov@ovov ~> cat repl.py
   while True:
       print(eval(input(">>> ")))
   ovov@ovov ~> python3 repl.py
   >>> x = 1
   Traceback (most recent call last):
     File "repl.py", line 2, in <module>
       print(eval(input(">>> ")))
     File "<string>", line 1
       x = 1
      ^
   SyntaxError: invalid syntax


`input` isn't READ, it's READ-LINE. READ reads one Lisp form, even if it's more than one line, and even if there are more than one on one line.
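
For contrast, a small illustration of what READ returns (Common Lisp):

    ;; READ parses one complete form, however it is spread across lines:
    (with-input-from-string (s "(+ 1
                                   2)")
      (read s))
    ;; => (+ 1 2)  ; a list, not a string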

Python's `eval` also isn't sufficient here. For example, you cannot enter that loop you wrote at the prompt it provides when you run it, even if you write it on one line. You could use `exec`, but then everything would print as `None`.

There's the larger point about forms vs strings and printing readably and such, but this doesn't even provide the same basic functionality that a Python programmer would expect from a REPL.


This is a fundamentally different function that works with strings and not symbolic data structures.


That is cool, but it's pretty similar in Ruby.

    def repl
      loop { puts( eval gets ) }
    end
I guess I just prefer Ruby for general legibility. The same type of bracket everywhere makes pairing them mentally an error-prone chore. At least for yours truly.


This is actually a very different piece of code.

puts and gets are string functions. read and print are code deserializers and serializers.

eval is a function that takes a data structure representing code (as deserialized by read) and computes its value, which is itself a data structure; print then serializes it.

While you might get the same effect as a user, the mechanics and metacircularity are lost.


plus, it's a gepl, not a repl.


My previous, somewhat-cranky comment got downvoted away, but I do think Lisp's "higher-order magic" and "ability to do calculus" are being a bit exaggerated here. From below - evaluating an Nth derivative in 80s-era C is not so atrocious:

    #define DX 0.0001
    typedef double (*func)(double);

    double NthDeriv(int n, func f, double x) {
        if (n == 1) {
            return (f(x + DX) - f(x)) / DX;
        }
        else {
            return (NthDeriv(n - 1, f, x + DX) - NthDeriv(n - 1, f, x)) / DX;
        }
    }

    double Cube(double x) { return x * x * x; }

    double result = NthDeriv(3, &Cube, 5.0);
As mentioned in previous discussions of this article, the equivalent in Python is pretty elegant:

    >>> def deriv(f):
    ...   dx = 0.0001
    ...   def fp(x):
    ...     return (f(x + dx) - f(x)) / dx
    ...   return fp
    ...
    >>> cube = lambda x: x**3
    >>> deriv(cube)(2.0)
    12.000600010022566


I think the thing that impressed the author at the time was more the fact that functions, being first-class values, can be introduced at runtime in a REPL, rather than having to be "planned" at compile time. So the C code isn't really analogous, but the Python code is.

But re: the Python code—I would say that, from the perspective of the 1960s, all modern "dynamic" languages that have REPLs (like Python) are Lisps in essential character.

"Lisp", back then, referred less to "a language that uses a lot of parentheses", and more to things like:

• runtime sum-typing using implicit tagged unions;

• parameterization of functions using linked lists (or hash-maps) of paired interned-string "keys" and arbitrary product-typed values, rather than parameterization using bitflags or product-types of optional positional parameters;

• heap allocation and garbage-collection;

• a compiler accessible by the runtime;

• "symbolic linkage" of functions and global variables, such that a named function or variable "slot" can be redefined (even to a new type!) at runtime, and its call-sites will then use the new version.

We only notice the parens as the differentiating feature of Lisps nowadays, because everything else has become widely disseminated. Perl and Python and PHP and Ruby (and even Bash) are fundamentally Lisps, in all of the above ways. Lisp "won."


However, PHP, Python, Ruby, JS, etc. aren't homoiconic, which would disqualify them as "lisps" in the minds of most Lispers, including Lisp's creator.


One might say that they're all implementations of M-expressions: https://en.wikipedia.org/wiki/M-expression

My point, though, was that while the proponents of Lisp define Lisp one way (by the things only Lisp can do due to e.g. homoiconicity), the opponents of Lisp (like the author was, coming into the course) define Lisp by the set of features that make Lisp "not a Real Programmer†'s programming language"—i.e. the set of things that make them not want to use it.

http://www.catb.org/jargon/html/R/Real-Programmer.html

My assertion was, from these opponents' perspectives, there are very few languages left for "Real Programmers"; most modern languages have inherited nearly all of those horribly convenient Lisp-isms. Heck—modern CPUs are so good at pointer-chasing, they may as well be Lisp Machines!


Python, Ruby, and JS/ES are all on the same side of the Smalltalk family tree, aren't they?

On another side of the Smalltalk family tree, languages like Java embraced a kind of static typing and found strong success with it, leading to the promotion of strongly-typed structures over lambdas. This side, ironically, did the most work on JITs, which are meant to compensate for dynamic language features, and languages like Dart are still marrying JIT techniques to strong static type systems today.

And on another side of the Smalltalk family tree, languages like E are fully oriented around objects with rights, so that instead of structures which can be examined, values must have messages sent to them instead. Homoiconicity, lambdas, pattern-matching lists, and other staples of Scheme are nonetheless common in E and Monte.

I think that the Smalltalk family, overall, could be viewed as a response to the barren syntactic landscape of Lisp, but I think that they could also be viewed like any other language: Taking what they like from ancestors, and not taking what they don't like.


Lisp was developed largely for Symbolic Computing. See the famous McCarthy paper. Early on it was detected that Lisp execution can be seen as Symbolic Computing itself - see the Lisp interpreter executing symbolic expressions and the famous definition of Lisp evaluation in Lisp itself. Lisp then had the first self-hosted compiler, the first integrated macro system and the first integrated structure editors in the early 60s.

That was Lisp back then. Lisp then early on was used in a bunch of symbolic computing domains: theorem provers, computer algebra, natural language processing, planning, rule-based languages, programming language implementation (languages like ML or Scheme were implemented first in Lisp).

In the Lisp sense Python does not have a REPL, since it does not implement READ, EVAL and PRINT over symbolic data, like Lisp does.


Queinnec's Lisp in Small Pieces explicitly argues that learning eval helps you model all dynlangs.


Seeing the symbolic derivative is the amazing thing to me. Really highlights the blurring of the lines between data and code.

You are correct that the analytic derivative isn't that hard to do in any language.

Edit: Should have said "many languages" not just "any"


I put up https://news.ycombinator.com/item?id=18343652 to explore a little more of the symbolic derivative fun you can do in lisp. Would love to get feedback on that post.


The Scheme and Python versions both return the derivative function. The C version is significantly less cool. I'd want to do something like:

    func cube_3rd_deriv = NthDeriv(3, &Cube); /* returns a function */
    double result = cube_3rd_deriv(5.0);


You can still do this in C, with the same implementation Lisp uses under the hood. C being C, there is a bit of syntax boilerplate. I also omitted some details which may have been relevant at the time such as ensuring no cast from function pointers to data pointers (not hard to handle, but makes the code a bit longer).

    #include<stdio.h>
    #define DX 0.0001
    typedef double (*function)(double);
    typedef double (*closure_body)(void*, double);
    struct deriv_env { function f; };
    struct closure { void* env; closure_body body; };
    #define CALL(c, x) ((c)->body((c)->env, (x)))
    double deriv_body(void *env, double x) {
        function f = (function)env;  /* env carries the wrapped function pointer */
        return (f(x + DX) - f(x)) / DX;
    }
    void deriv(function f, struct closure *out) {
        out->env = (void *)f;
        out->body = deriv_body;
    }
    double cube(double x) { return x * x * x; }

    int main(void) {
        struct closure cube_deriv;
        deriv(&cube, &cube_deriv);

        printf("%f\n", CALL(&cube_deriv, 2.));
        printf("%f\n", CALL(&cube_deriv, 3.));
        printf("%f\n", CALL(&cube_deriv, 4.));

        return 0;
    }


Maybe not as theoretically cool (doesn't happen at runtime) but isn't this pretty much the same in practice?

    double Cube3rdDeriv(double x) {
        return NthDeriv(3, &Cube, x);
    }
&Cube3rdDeriv can now be passed around in a higher-order way.


Same in Haskell:

   *Main> derivative f x = (f(x+dx)-f(x))/dx where dx = 0.0001
   *Main> derivative (^3) 2
   12.000600010022566


But would you introduce a function pointer on the first day of a student's first programming class? Good luck.


In Python:

    def deriv(f): return lambda x: (f(x+dx) - f(x))/dx
or even:

    deriv = lambda f: lambda x: (f(x+dx) - f(x))/dx


That's a sort of function, yes, but it's numerical analysis. In a Lisp you can pass in the equivalent of "4x^3 + (x^2)/2 -5X + 6" and get back "12x^2 + x - 5" - a function that is the derivative of the function you put in, rather than a function that approximates the value of the derivative at a value. You'd be manipulating the function rather than its return value.
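
To make that concrete, here is a minimal sketch of the idea in Common Lisp (not the article's or SICP's code; a hypothetical function d that handles only numbers, one variable, and two-argument + and *, with no simplification):

    (defun d (expr var)
      (cond ((numberp expr) 0)
            ((symbolp expr) (if (eq expr var) 1 0))
            ((eq (first expr) '+)
             (list '+ (d (second expr) var) (d (third expr) var)))
            ((eq (first expr) '*)
             (list '+
                   (list '* (second expr) (d (third expr) var))
                   (list '* (d (second expr) var) (third expr))))
            (t (error "Don't know how to differentiate ~a" expr))))

    (d '(+ (* x x) x) 'x)
    ;; => (+ (+ (* X 1) (* 1 X)) 1)  ; i.e. 2x + 1, unsimplified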


Lisp is definitely not doing symbolic algebra and finding a pure analytical answer like that under the hood - symbolic math, differentiation rules etc. are a whole other beast (see SymPy for example).

Maybe that's the misunderstanding in the original article that made it seem so mindblowing?

Evaluating (f(5.0001) - f(5.0)) / 0.0001 just gives an approximation of the derivative at a certain point in the function (aka the "finite difference method").


Symbolic derivatives are one of the very next sections in the famous Lisp lectures that the article's author was watching.


Interesting - for symbolic differentiation I imagine it's about code-manipulation rules for e.g. calculus' product rule, chain rule etc.?


Essentially, yes. And if you want it to be pretty, it gets complicated quickly.

https://mitpress.mit.edu/sites/default/files/sicp/full-text/... is the chapter. Turns out it wasn't the "very next" one. But it is in my head.


Here's symbolic differentiation done in ~20 lines of Haskell: https://www.quora.com/What-are-20-lines-of-code-that-capture...



It's a matter of time before there is a "bot" that auto-posts this kind of response for historical reference.


There is a link at the top called “past” that does exactly what all these “helpful” posts do. And it’s been there for a long time.


There used to be. Not sure what happened to it.


The classics are worth revisiting on a regular basis.


I don't think the grandparent is complaining about reposts, merely linking to previous discussions for the curious.


This isn't identical, but I remember when I first learned first-class functions and partial application in Haskell about ten years ago, it was a "the world is different now" moment. I had reinvented partial application approximately thirty billion times using overloading or elaborate if-elseif-else chains. When I learned Haskell, and saw that I could magically make my non-list functions automatically lift across a list, and how I could define things like this myself, I was immediately convinced that I would never go back to C again (I didn't know about function pointers).

I learned Clojure about a year ago (which was my first introduction to Lisp outside of reading SICP), and it gave me a similar feeling. I felt like Clojure was a "better Java than Java", and now it's my go-to JVM language. I think McCarthy was really onto something with Lisp :).


Could you give an example of partial application with Haskell in the way you mention? I've had similar feelings with Clojure.


> I was impressed first by the fact that whoever designed this particular Lisp system cared about efficiency.

This isn't a particularly unusual approach. Common Lisp, a contemporary Lisp dialect, was designed with efficiency in mind. The notion that "Lisp = slow + inefficient + spending 80% of time on garbage collection" is a myth.


This comment was regarding TCO, which Common Lisp (unlike Scheme) doesn't guarantee. Many CL compilers today offer it, but IIRC that was rare in the 1980's.


There was also a lot of consternation when TCO was formalized as part of R*RS. It's a fundamental part of how Scheme works, so it was obviously necessary. But from a compiler writer point-of-view, it was a rather exotic demand at the time.

Though we've moved from a period of languages defined by a spec to languages defined by implementation, such as Perl, PHP, Ruby and Python. So what is "normal" has changed a lot.


This article is set about 20 years prior to common lisp's standardization. It was a reasonable belief then.


It’s set in 1983. “Common LISP The Language” was published in 1984.

(The ANSI standard was 1994. Still nowhere near 20 years).


Well, 10 years is the same order of magnitude as 20.


Not really. If you were at MIT in 1983, you would probably have had access to Maclisp, whose compiler had seen some work by then. The first Scheme implementation was also written in Maclisp.


Isn't the derivative relatively easy in C?

    #include <stdio.h>
  
    const double delta = 1.0e-6;
    double cube(double x) { return x * x * x; }
    double deriv(double (*f)(double), double x) { return (f(x+delta) - f(x)) / delta; }

    int main()
    {
        printf("%f", deriv(&cube, 2));
        return 0;
    }
Am I missing something?


I think it's that you can calculate the value, but you can't (or can't as easily) create a new function that is the derivative of another function, and pass that around.


Yeah, you can return a new function, but it is less easy than in Lisp or F#.

But in general a lot of that stuff in the article seems mundane today; you have to imagine seeing it in 1983.


> yeah you can return a new function

How?


You could do it with JIT compilation or self-modifying code. I'm not sure if there's a simpler way in C


You can exec a C compiler and use dlopen on the generated object file, or use libtcc[1].

https://www.bellard.org/tcc/


That makes sense, thanks. And as others have mentioned it's therefore not easy to compose higher order derivatives.


In a language with higher-order procedures like Lisp or Julia, you can write code that actually returns the derivative as a function, as opposed to differentiate at a given point. This is harder to do (gracefully, anyway) in e.g. C. The beauty of the functional approach is that it corresponds much more closely to how we think about math. See for example Structure & Interpretation of Classical Mechanics (http://mitpress.mit.edu/sites/default/files/titles/content/s...). This is especially powerful when combined with automatic differentiation, as is done in the ScmUtils system that goes along with SICM. (Finite differencing has numerical stability issues when carried too far.)


With only the functions you've defined, how do you calculate the second derivative?

If the `deriv` function instead returned a function instead of a number, you could apply the `deriv` function twice.
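
A sketch of that shape in Common Lisp, for comparison with the C version (illustrative names; a forward difference with a fixed step):

    (defconstant +dx+ 1.0d-4)

    (defun deriv (f)
      "Return a new function approximating the derivative of F."
      (lambda (x) (/ (- (funcall f (+ x +dx+)) (funcall f x)) +dx+)))

    ;; Second derivative of x^3 at 2.0, by applying DERIV twice:
    (funcall (deriv (deriv (lambda (x) (* x x x)))) 2.0d0)
    ;; => approximately 12.0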


You're missing that:

- you can't write a C macro which will do this symbolically. We really just want to be calculating 3 * x * x.

- having cube and deriv, you can't write an expression which combines these two, and is itself a function.

BTW, the & is unnecessary in &cube; a primary expression which names a function evaluates to a pointer to that function.


While you can take the address of cube, you can't do this trick more than one level deep: you can't return a new function from "deriv".


Here's the generic C++ version (https://gcc.godbolt.org/z/mMln6L):

    constexpr double delta = 1.0e-6;
    
    constexpr auto deriv = [] (auto f) { 
        return [=] (double x) { 
            return (f(x+delta) - f(x)) / delta; 
        };
    };

    constexpr auto f¨ = [] (auto f) { return deriv(deriv(f)); }; 

    double res = f¨([] (double x) { return x*x*x; })(2.0);


Here's a deriv example in Go: https://play.golang.org/p/kUM5qn7oiXg

I find that it follows the functional spec quite closely, thanks to Go functions being first class citizens.


I agree, it isn't the greatest example for showing the power of Lisp.

Here's a much better one that uses a macro to do symbolic differentiation in <150 lines:

https://github.com/aksiazek/symbolic-differentiation

Much harder to do that in C ;-)

Peter Norvig's book "Paradigms of Artificial Intelligence" uses symbolic differentiation as an example in one of the early chapters.


The article is talking about 1983. Your example relies heavily on Greenspun's 10th rule.


It'd look slightly different in 1983 but the functionality would all be there. const->#define and maybe you'd need to change the function signatures to K&R style, but that's it.


I was about to suggest the same thing. This is simply computational finite difference and almost follows directly from the equation https://en.wikipedia.org/wiki/Difference_quotient


I thought it would be a "down with Lisp" article, but it matched my experience with the language: omg, how cool. From that ancient introduction to Lisp, all I learned was that Lisp was as close to perfection and beauty as I would get in a programming language.


Serious question: what is it about the "beauty" of Lisp that the HN community seems to like so much? To me, how Lisp looks is its worst quality--the number of parentheses is just mind boggling. I want to understand why it is so loved.


Presumably, this is because you've looked at Lisp code, but you probably haven't understood Lisp.

Criticizing it on the basis of parentheses would be roughly equivalent to criticizing someone talking about the beauty of mathematics on the basis of the color of the piece of chalk they're using. It's true that it's criticism about aesthetics, but it's not criticism about the aesthetics that the mathematician was talking about.

Once you've understood in what sense people mean that Lisp is beautiful, you can disagree, but this disagreement will not concern parentheses.

As an aside: I have a similarly shallow aesthetic aversion to JS syntax for the same reason. When I encounter a 12-line ragged cascade of curly brackets, square brackets and parentheses, sprinkled with semicolons, I wonder how that can be considered OK.

Aside 2: with Lisp, you edit programs structurally, meaning that their position is essentially managed for you. This is what people refer to when they mean that "the brackets disappear after a while." You're not focusing on them; they're handled by something else.

In my editor, I have the closing brackets faded nearly into the background, which reflects how concerned I am about their existence.


>When I encounter a 12-line ragged cascade of curly brackets, square brackets and parentheses, sprinkled with semicolons, I wonder how that can be considered OK.

It's considered OK because the extra syntax carries semantic weight, and the semicolons disambiguate intent for the interpreter (because while semicolons are optional in javascript, leaving them out can lead to errors.)

And as someone just beginning to play around with (Arc) Lisp, I still can appreciate both paradigms. Neither is objectively wrong.


> It's considered OK because the extra syntax carries semantic weight, and the semicolons disambiguate intent for the interpreter (because while semicolons are optional in javascript, leaving them out can lead to errors.)

Which is a very roundabout way of saying that JavaScript has a very complicated grammar. Is it necessary? No, it isn't. People use and love languages with much simpler grammars, such as FORTH (reverse Polish notation), APL (monadic and dyadic algebraic notation), and Lisp (fully parenthesized Polish notation).


>Which is a very roundabout way of saying that JavaScript has a very complicated grammar.

If you think Javascript's grammar is "very complicated" then C++ will probably give you a heart attack.

You're overstating what amounts to an aesthetic argument, which is fine, but not really compelling.


> Presumably, this is because you've looked at Lisp code, but you probably haven't understood Lisp.

I'm not sure this is a great presumption. Someone can be familiar with Lisps, understand the design choices involved, even deeply appreciate the beauty of the resulting language, while still preferring languages with a more flexible syntax.

> This is what people refer to when they mean that "the brackets disappear after a while." You're not focusing on them; they're handled by something else.

This is exactly the problem. Lisps surrender the issue of syntax completely to the interpreter/compiler, which makes it easy for a machine to parse, but harder for a human. I personally prefer syntax to be designed for humans to read and write, because it's going to be translated to something different to be executed anyway.

Now, the simplicity of the lisp syntax is of course intimately connected to the homoiconicity of the language and the extremely powerful macros, so I do understand the value of it and the tradeoffs involved. (Although I find it slightly ironic how it's venerated, given that the original lisp actually had two syntaxes, M-expressions for the human to manipulate and S-expressions for the machine to manipulate).

I personally prefer to do my "meta-programming" in the type system and have flexible syntax to express functions in different ways relevant to the uses of the language. That can be Elm with its simple type system and elegant operators for application (|> and <|) and composition (>> and <<) which indicate direction, or it can be Agda with its unicode syntax and custom mixfix operators, where you can define an if-then-else as a function as "if_then_else_ : (b : Bool) -> (x : a) -> (y : a) -> a" and call it "if <b> then <x> else <y>". I find that the way these languages use operators reduces the need for parentheses and allows you to express what your code is doing, whether it's a pipeline of functions or some imperative-like steps being done in order. The result is a syntax that I find clearer.

I am however, well aware that a lot of people dislike Haskell's use of operators and that these languages are a minority in terms of usage indicating that my preference might not be common, but I don't necessarily presume that it's because they don't understand Haskell.


> which makes it easy for a machine to parse, but harder for a human.

That is a naive fallacy. What is harder for the machine is harder for the human.

By and large, humans rely on formatting cues, especially indentation, to parse programs. Human eyes can be fooled by bad indentation:

  /* C */
  if (foo)
    bar();
    cleanup();

  if (outer)
    if (inner)
      foo();
  else
    bar();
If you want to compare human versus machine parsing, then you need to write all the code samples on a single line with no breaks (if the programming language allows that). All optional whitespaces that can be written as a single space should so be written. This way, the humans are actually parsing the same tokens as the machine and not relying on cues that aren't part of the language.

> I personally prefer syntax to be designed for humans to read and write

There is no such thing in existence. People who design syntax simply use their whims, rather than any cognitive science that brings in any measure of objectivity. Those whims are shaped by what those people have used before.

The concept behind Python is actually the closest to getting it right: it recognizes that people really grok indentation rather than phrase structure, and so it codifies that indentation as the phrase structure.

> Languages are a minority in terms of usage indicating that my preference might not be common

That's another fallacy. The vast majority of all programming and other computing languages that have ever been invented are not used at all, or used by a vast minority. In that vast majority, we can find the full gamut of what has been done with syntax, semantics and everything else. Popularity and lack thereof isn't trivially driven by cosmetics.


>That is a naive fallacy

No, I'm not claiming an essential difference, merely that they are optimized for different things.

>There is no such thing in existence.

Of course there is: machine code is designed for computers; higher-level languages are designed for humans.

>Popularity and lack thereof isn't trivially driven by cosmetics.

I'm not talking about what drives popularity, but what drives cosmetics.


Machine and assembly languages are just another example of input that is easy to parse for human and machine.

(Machine programs are hard to understand, but in this area we don't have an easy human to machine comparison: machines generally don't understand programs. The advantage of machine language is that it doesn't have to be understood in order to be efficiently executed.)


> Lisps surrender the issue of syntax completely to the interpreter/compiler

It doesn't. It just works differently and looks different because Lisp syntax is defined on top of s-expressions. Lisp syntax is provided by built-in syntax for function calls, a bunch of special operators (let, progn, block, catch, quote, function, flet, if, ...) and a zillion macros. Each macro provides syntax - from primitive examples to complex syntax (the LOOP syntax is an example). The syntax is user-extensible by defining macros.

The level of s-expressions is a syntax for data. This syntax can be changed by readtables and readermacros.
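
As a tiny illustration of the "macros provide syntax" point, here is a hypothetical user-defined while form, sketched in Common Lisp (not a built-in):

    (defmacro while (test &body body)
      `(loop (unless ,test (return))
             ,@body))

    (let ((i 0))
      (while (< i 3)
        (print i)
        (incf i)))   ; prints 0, 1, 2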


> I'm not sure this is a great presumption. Someone can be familiar with lisps, understand the design choices involved, even deeply appretiate the beauty of the resulting language, while still prefering languages with a more flexible syntax

Yes, you can, but re-read the original comment: the implication was "how can it be beautiful with all those parentheses?" It seems clear that two different aspects of beauty were conflated.

> …what is it about the "beauty" of Lisp that the HN community seems to like so much? … the number of parentheses is just mind boggling. I want to understand why it is so loved.

Also, I would love to hear which language(s) has a more flexible syntax than Lisp, seeing as expressiveness is one of Lisp's strong points.

> Lisps surrender the issue of syntax completely to the interpreter/compiler, which makes it easy for a machine to parse, but harder for a human.

What does "surrender the issue of syntax" mean?

In any case, citation needed for this one. "Easy" and "hard" are very strongly dependent on familiarity. This human finds them easy enough to parse.

> I find that the way these languages use operators, reduces the need for parenthesis and allows you to express what your code is doing, whether it's a pipeline of functions or some imperative-like steps being done in order.

Reducing parentheses is not really a strong selling point for someone familiar with Lisp, because Lisp programmers don't really work with parentheses like in other languages (again, they're managed).

It's not that they're liked for their own sake, but they enable many advantages (such as homoiconicity, structural editing, easy, selective evaluation of expressions at any level of nesting, and so on).

That being said, in Clojure, I often prefer to pipe functions like you mentioned when it seems like the expression would end up unnecessarily nested otherwise:

  (-> 8 inc str vector)
  ;; => ["9"]


> Also, I would love to hear which language(s) has a more flexible syntax than Lisp, seeing as expressiveness is one of Lisp's strong points.

I'm thinking about ML-like languages such as Elm, Haskell and Agda (to give a spectrum of examples with varying degrees of complexity in terms of language features).

> Lisps surrender the issue of syntax completely

I think I am referring to the same thing as you are when you say that they're not used like other languages, and that they are managed.

>This human finds them easy enough to parse

Easy enough is good, but I think we can do better!

> It's not that they're liked for their own sake, but they enable many advantages.

I think this is what I was trying to get at. :)

FWIW, I like Lisps a lot, and consider them far better than most mainstream languages.


> the number of parentheses is just mind boggling

There aren't actually more parens in a Lisp program than a program written in a C-like syntax (which is really an algol-like syntax). They just stand out more for two reasons:

1. There is less punctuation in general, so the parens are more obvious. Instead of f(x, y, z) you write (f x y z). Without the commas, the parens stand out because that's all that is left.

2. There is only one kind of parens in Lisp whereas C-like languages use at least three: (), [], and {}, so that makes any particular kind of paren less prominent.


I believe it's purely seeing lines that end with )))))))) that leads to the OP's observation ("To me, how Lisp looks is its worst quality--the number of parentheses is just mind boggling"). You don't get that in a C-like syntax. In practice you don't read each individual closing paren to understand the code, so it doesn't matter. But it sure looks "scary."


> You don't get that in a C-like syntax.

No, instead you get something like );}]);) except that that's usually split up over several lines.

BTW, it's pretty easy to tweak Lisp's syntax so that your parens don't get so deeply nested. See

https://github.com/rongarret/tweetnacl/blob/master/ratchet.l...

for an example.


This isn't really true.

1. Other languages have operators which can be written without brackets, x + y vs (+ x y).

2. Precedence rules allow ex. polynomial expressions to be written without brackets, 2 * x + y vs (+ (* 2 x) y).

3. Algol-like let-bindings or monadic-do-style variable-bindings that inject bindings into the containing block (as opposed to Lisp-style let-blocks where the new scope typically corresponds to a new block) use fewer brackets and keep nesting depth smaller.

    # 2 sets of brackets
    fun foo(x, y, z) {
        let w = x + y;
        let u = w + 2*z;
        let v = w - 2*z;
        u * v
    }

    # 13 sets of brackets
    (fun foo (x y z)
        (let ((w (+ x y))
              (u (+ w (* 2 z)))
              (v (- w (* 2 z))))
            (* u v)))


It's pretty easy to embed an infix parser in Lisp if that's the only thing standing between you and happiness. e.g.:

http://www.flownet.com/gat/lisp/parcil.lisp

See:

http://www.flownet.com/gat/lisp/djbec.lisp

for some example code that uses this parser. Scroll down about half way and take a look at xpt-add and xpt-double.


It's even easier to embed a sublanguage that requires two sets of parens to write a list if that's what happens to make you happy.

    ((+ ((* 2 x)) y))
"I have this sublanguage..." is not a very interesting participant in the evidence that Lisp doesn't have lots of parens.


I can add arbitrary numbers of parens to your code too. I fail to see the point.


> There is less punctuation in general, so the parens are more obvious. Instead of f(x, y, z) you write (f x y z). Without the commas, the parens stand out because that's all that is left.

Well put, Madam/Sir.


3. Lisps optimize tail calls, leading to easier and more efficient recursion. This does tend to increase depth of code.

4. The lisp punctuation style closes all levels on a single line, where most C code guidelines close one level per line. This leads to a "thick" chunk of close parens that students of lisp find harder to read at first.


1. You write everything as a list. So there is syntax, but only barely.

2. Your data is a list, and your program is also a list.

3. Algorithms are basically list manipulation (stacks, queues, trees, adjacency lists), so it's easy to write complex algorithms in Lisp.

4. Tail-call optimization. This amplifies 3 further.

5. Functional programming features.

6. REPL. I mean like a real REPL.

7. Macros.

1-7 help you express problems and their solutions in code that represents exactly that. Boilerplate and other assisting code is largely nonexistent.

Another reason: if a thing has been around the longest, more people have thought about it. Both the quality and the quantity of its literature are higher than for other languages.


> Another reason: if a thing has been around the longest, more people have thought about it. Both the quality and the quantity of its literature are higher than for other languages.

Being older doesn’t necessarily mean either quantity or quality of literature will be greater. MUMPS has been around for much longer than Go, for example.


I think because it is a powerful language - and once you learn to look past the parentheses it actually has some pretty cool features.

For example, it is a homoiconic language, so any piece of data can be evaluated as if it is code.
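For example, a minimal sketch at the REPL (the variable name is just for illustration):

  (defparameter *data* '(+ 1 2 3))  ; just a list: a symbol and three numbers
  (length *data*)                   ; => 4 -- ordinary list operations apply
  (eval *data*)                     ; => 6 -- the very same list, handed back as code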

The only way to really understand it is probably to learn it :)


I would like to make the controversial claim that homoiconicity counts against adoption; the single representation carries little structural information and the meaning of a symbol is highly dependent on its containing context. Whereas the ALGOL-derived languages use different types of bracket or other means to indicate visually what the semantics are.

Ironically, modern JavaScript often replicates the bracket pileup, just with "})" instead. Python does away with it by having the "non-indented newline" act as an invisible semantic character that closes any number of scopes.


> the meaning of a symbol is highly dependent on its containing context

That being a problem is pretty much solved by not writing 1000 line top-level expressions with thirty nesting levels.

People who actually write Lisp are engaging their imagination for the program itself. Thus their imagination is too busy to come up with scary reasons how things could go wrong that would spook them out of continuing.


Hmm. I've been thinking for a while now that Lisp (and also FP) matches how some peoples' minds work, and doesn't match how other peoples' minds work. For those who match Lisp and/or FP, it's like a revelation, and it's very freeing. For others, not so much - it's a new way of programming, and you can do it that way, but why would you want to?

You said:

> ... the single representation carries little structural information and the meaning of a symbol is highly dependent on its containing context.

That makes me wonder if the difference is abstract vs. concrete thinking - those who by nature prefer abstract thinking will find Lisp more natural, and those who prefer concrete thinking will find it clumsy and unsettling.

Choosing the "right" programming language is not just finding the right language for the task (though it is that). It's also finding the right language that fits our minds - and our minds are not identical.


I think it can be learned - but one needs to have an open mind. If one has learned what a specific program looks like in something like PASCAL and then later learns Lisp, there are a bunch of concepts which need to be adjusted...

I started with BASIC, various Assembler variants, PASCAL, UCSD PASCAL, MODULA 2, ... and then learned Lisp variants and Scheme - and also learned basics of some other languages like ObjectPascal, SAIL, Prolog, Postscript, Smalltalk, ...


I'll also add that the powerful, easily deployed macro facilities its homoiconicity enables hurt adoption as well. Macros and DSLs can mean increased productivity and decreased LOC for individuals and small teams, but the same features can be not so good for large teams and communities, since each program may tack on more and more macros you have to keep in mind before you can fully understand the code being read. I find it easier to read through even a lengthy file of C code using the same old familiar primitives; YMMV.


That's not how code reading works. There are at least two levels of code reading: understanding WHAT the program does and understanding HOW it works.

Most of the time a Lisp programmer wants to read WHAT the program does and that in a very descriptive notation.

Any Lisp has a lot of macros. The base Common Lisp language is full of macros. Any defining operator - functions, macros, variables, classes, structures, methods, ... - is already a macro.

For example a structure - a record - is defined like this:

  (defstruct ship
    (x-position 0.0 :type short-float)
    (y-position 0.0 :type short-float)
    (x-velocity 0.0 :type short-float)
    (y-velocity 0.0 :type short-float)
    (mass *default-ship-mass* :type short-float :read-only t))
This is using the macro DEFSTRUCT.

It's easy to see that it defines a structure type called SHIP with 5 slots. Each slot has a default value and named options.

A programmer will NEVER need to see what the expansion looks like. The code the macro generates is twenty times larger than the source code. What the programmer actually needs is a documentation of what effects the macro has: defining a type, defining accessors for the slot, defining a type constraint for the slots, making one slot read only, ... This is better read from the documentation of this macro operator, instead of trying to see it from reading low-level operator code implementing them.
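(For the curious, the expansion is one standard call away, though the output is long and implementation-specific:)

  (macroexpand-1 '(defstruct ship (x-position 0.0 :type short-float)))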

Every programmer will be happy to read this macro form - no one wants to see the expanded code, how structures are actually defined in terms of low-level operators.

Thus MACROS increase the readability of programs a lot - independent of the team size. Really no one would want to define a structure type by manually creating all the definitions for it (a type, an allocation function, slot accessors, type predicate, compile time effects, ...).

What they can make more difficult is some maintenance tasks - where bugs appear on a meta-level where programs transform code.


One word: loop :-)

(It's a bizarre mini-language for looping constructions and for terrifying animals and small children.)


  (loop for i from 10 upto 20
        do (print i))
what does it do? Maybe it prints the numbers from 10 upto 20?

    (loop for element across vector
          sum element)
Hmm, what does it do? Maybe it sums all the elements of a vector?

Ada:

  for E of The_List loop
     if Is_Prime(E.P) then
        E.Q := E.Q + X;
     end if;
  end loop;
Lisp:

  (loop for e in the-list
        if (primep (p e))
          do (incf (q e) x))
Totally weird and bizarre how it looks similar.

Even stranger:

  (
  loop for e in the-list
       if (primep (p e))
          do (incf (q e) x)
       ; end if
  ; end loop
  )


"The Anatomy of a Loop", Olin Shivers. http://www.ccs.neu.edu/home/shivers/papers/loop.pdf

And that's in Scheme, so it doesn't look like someone dropped a chunk of Algol in your coffee.


Which explicitly mentions loop systems for Lisp as its inspiration: Yale Loop and Jonathan Amsterdam's excellent ITERATE.


I use simple loop constructs like the above all the time; it's especially useful for collecting. But elaborate loop constructs in their full glory are impossible to understand and IMHO should be banned from use on any software engineering team.
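For example, a simple collecting loop stays perfectly readable:

  (loop for x in '(1 2 3 4 5)
        when (oddp x)
          collect (* x x))
  ;; => (1 9 25)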


Seen pretty elaborate loop constructs in large Common Lisp code bases I was working on, and my conclusion is that the only thing that should be banned is the lack of willingness to spend 30 minutes at some point learning the loop syntax. Seriously, after you write a few loops on your own, e.g. collecting over several hash tables in parallel, you won't have much of a problem anymore.

I still feel the current generation of programmers has a learning phobia.


Yeah, LOOP is super un-lispy. Dick Waters is a great guy but I never liked LOOP and never use it.


What does it have to do with him?

The basic idea actually comes from Interlisp and Warren Teitelman's 'Conversational Lisp' and its FOR macro.

The main purpose of LOOP is to question ones assumption what is Lispy and what not. ;-)


Pretty sure Dick wrote LOOP. There was an AI working paper on the subject by him, IIRC. Teitelman had left MIT by the time I got there, though I later got to deal with DWIM (which seems to have infected web browsers and npm etc. :-( ) at PARC. One great Teitelmanism was the addition of ] to Interlisp.


Are you sure you are not mixing this up with the Series system, which Richard Waters wrote a bunch of papers on?

The paper from 1980 on the 'LOOP Iteration Macro' by Burke & Moon does not mention him at all.

http://www.dtic.mil/dtic/tr/fulltext/u2/a087372.pdf


Ha, that's a funny scan! I have a paper copy of that memo someplace.

Yeah, I might be misremembering.


The beauty of Lisp-like languages isn't in the visual appearance of the code as text (the parentheses and layout), but in the structure of the code as trees, the uniformity, and the homoiconicity.


There's also a historical component in all of this. Modern languages have adopted many features from the old Lisps that were quite unique back then. The same is true with the ML language family: between the two, ML and Lisp seeded a new generation of languages that are, on average, much more powerful than the languages in common use in the 1970s and 80s. (Smalltalk deserves a special mention, but it came later.)

I'm still fond of Lisp, but languages like Python, D, Nim (and increasingly, Rust) offer a lot of the Lisp affordances that I really valued -- in particular, easy compile-time metaprogramming. The biggest missing bits are Lisp's deeply integrated REPL-driven programming, and the use of program images -- which have many drawbacks but also some benefits.

IMO, everyone should write and maintain a medium-sized program in a modern Lisp (SBCL, CCL) at least once, just to get an appreciation for how different the programming / debugging experience is. So many well-integrated tools at your disposal, even at runtime.


- Different from what you are used to does not mean worse.

- Structural simplicity. Instead of having a huge mess of syntax to keep track of, it has only a very small number of constructs. Simplicity is beautiful.


I guess "ycombinator" (the fixed-point combinator from Lisp and lambda-calculus lore) as a subdomain, and the fact that the site code was originally written in LISP, attract lots of LISPers. I'm personally not a big fan of LISP, much less of the frequent hijacking of threads into discussions about s-expression supremacy and other holier-than-thou topics.


Code is data. So you can write a program to write parts of your program, on the fly
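A minimal sketch in Common Lisp (defsquare and cube are made-up names for illustration):

  ;; a macro is literally a program that writes part of your program
  (defmacro defsquare (name)
    `(defun ,name (x) (* x x)))

  (defsquare sq)   ; expands into (defun sq (x) (* x x))
  (sq 7)           ; => 49

  ;; or truly on the fly, at run time:
  (eval (list 'defun 'cube '(x) '(* x x x)))
  (cube 3)         ; => 27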


To be fair you can do that in most modern languages.

(I don't know Lisp,) but sometimes this approach just makes the code really difficult to understand afterwards. Does Lisp have something that makes it easier or better?


You can, but it's like picking your nose with boxing gloves on[1] - it's really clumsy. In Lisp, it's natural and easy. It's maybe the same in result, but it's really different in ease of use.

[1] Credit where due - I stole that phrase from my friend Michael Pavlinch.


One could argue that it's made "natural and easy" by making Lisp itself much more difficult to write. So, in other languages, the macro authors pay the tax, but their users do not. In Lisp, everyone pays the tax. This is, perhaps, why writing macros is far more common in Lisp than in other languages with them - if you pay the tax either way, you might as well fully utilize what it buys you. Whereas in another language, you weigh the benefit that a macro will give you, against the tax you will pay if and only if you write a macro.


Most "other" languages don't have Lisp-like macros, so I'm not sure what you are talking about here.

I don't think Lisp is much more difficult to write; on the contrary, out of all the languages I know well (C, C++, Python, Java, Common Lisp) not only do I find CL _by far_ the easiest to write, but also that it puts me in a state of flow (= unparalleled mental clarity, focus and productivity) which doesn't easily happen with the others.

Credentials: I spent close to 10 years writing C++ at Google.

I certainly think that Lisp is very different to most popular programming languages today and that difference is immediately obvious. This makes it very easy for people who do not like leaving their comfort zone to dismiss Lisp simply because it "feels" too strange to what they're already familiar with.


You really just have to dive into it. This image is of Scheme, but the sentiment holds for Common Lisp too: https://www.thejach.com/imgs/lisp_parens.png


Once you start using it you don't really see the parentheses, you simply see the atoms and structure.

OTOH distinctions like expressions vs statements don't apply -- my years as a lisp developer were the most productive of my life.


If the worst thing you can say about a programming language is that you don't like its syntax, then it is a pretty damn good language.

Lisp (or Scheme, in this case) is a very simple language that gets essentially everything right.


It's not the only thing people complain about; merely the first one. And I don't think anyone can seriously claim that Common Lisp is a simple language. Powerful, pragmatic languages never are.

As far as syntax goes, the proof is in the pudding. All languages have people gripe about some part of their syntax or another, but it's clear from experience that "OMG parentheses" is exceptionally prevalent. Hand-waving it away as something that people just don't get because of lack of prior exposure is not really sound - somehow other languages don't get similar complaints (at least, not as universally, and not to the same magnitude) with first-time users. Besides, why is there a lack of exposure? Why, because everything else is different. But why is it different? Isn't the obvious conclusion that Algol syntax family is vastly more prevalent for the simple reason that people prefer it, and simple homoiconicity is not sufficiently enticing?

All arguments in favor of Lisp syntax feel like they ultimately boil down to "you're holding it wrong". And that may well be so - but if so many people are finding it so awkward to hold, isn't that prima facie evidence of ergonomic deficiency? I don't claim to understand why Algol-style is easier. Maybe the way our visual processing works is just better with more varied punctuation? That's something for psychologists and brain scientists and maybe linguists to figure out. But in the meantime, we could at least acknowledge the way things are. I really like Lisp as a collection of ideas (not just the usual ones like HOF and macros, but also stuff like e.g. symbol-based namespacing, or the sheer flexibility of CLOS). But lispers have to ask themselves why, instead of Lisp seeing wider adoption, other languages - that came literally decades later, so "upstart" would be a very polite way to describe them in this context! - become vastly more successful than Lisp by appropriating its cherry-picked features.


The complainers are non-users; it's just a meme, like picking on Cobol.

So many computing languages have come and gone over the years; the number that have been created vastly outnumber those that have ever been popular, let alone that are now popular.

Lisp is amazingly vibrant as a family. People are still excited about it and there is development work going on. That's amazing for something with such old roots.

I added two instructions to the virtual machine this morning, and used them in the compiler of a Lisp dialect to eke out a little performance gain. Woo hoo!

The haters can all go stuff it.


> The complainers are non-users

Of course they're not users. If you put on a shoe and it's uncomfortable, you just go get a better shoe that is. That does not negate the validity of the question: why was the first shoe uncomfortable?


Lisp always catered to people with a certain state of mind. It was never a language positioned for popular appeal, and by that I mean the masses of 9-5 "brogrammers" we have today. Looking at how many people fall in love (or not) with SICP, today, and the reasons they give validates this line of reasoning. Lisp and SICP are meant for inquisitive thinkers and hackers who are willing to go DEEP. If you can superficially dismiss Lisp (and all the geniuses that worked with and improved it over the years) in the manner that you do, then certainly, Lisp is clearly not for you. You are not an artist. You are most definitely not a hacker.

You seem to think that popularity should be the prime consideration when it comes to programming language design. This is what gave us PHP and Javascript. I dare say that the people that use Lisp today (and there are plenty of those) do so because nothing else will be as good to them. They love the language. I've known people who moved jobs and got less money in order to work with Lisp. What does that say about the language?


If we go with this analogy honestly, we have to recognize that the vast majority of the programming world is hobbling around in prescribed footwear, and a good lot of it has bits of gravel and broken glass.


> Besides, why is there a lack of exposure? Why, because everything else is different. But why is it different? Isn't the obvious conclusion that Algol syntax family is vastly more prevalent for the simple reason that people prefer it, and simple homoiconicity is not sufficiently enticing?

That is just confusing cause and effect.

> All arguments in favor of Lisp syntax feel like they ultimately boil down to "you're holding it wrong". And that may well be so - but if so many people are finding it so awkward to hold, isn't that prima facie evidence of ergonomic deficiency?

You are confusing cause and effect again, this time by implying that there is something fixed about the way people learn languages (Chomsky's "language organ"). Learning is not like the shape of your hand.

Once you learn structured editing, Lisp code can be written and changed in much fewer keystrokes than code expressed in a more complicated grammar, which is actually ergonomic.


> That is just confusing cause and effect.

And what is the evidence to that claim?

I'll grant you that I haven't given any evidence for my claim, either. But, again, the onus is on those claiming that syntax doesn't matter to prove so, against overwhelming practical evidence (showcased by user count) that it does.


>Lisp (or Scheme, in this case) is a very simple language that gets essentially everything right.

I have to wonder if you actually know all that much about Scheme. Undelimited continuations, for example, are pretty much strictly broken. I like Scheme, but to claim it's flawless is kinda ridiculous.


The biggest benefit is that you can keep a very clear model of everything that's going on in the language (well, most things), from parsing to evaluation to application. The parentheses represent very important things about how the language works, and the regularity of the language emphasizes that it's a few basic principles replicated again and again. Also, the "beautiful" Lisp variants tend to be in the Scheme family. Common Lisp is warty and pragmatic by comparison.


There is no difference between Scheme and Common Lisp when it comes to parentheses and beauty. Everything you mention (simplicity, regularity, few basic principles) apply just the same to Common Lisp. The differences between the two are mostly standard library related (irrelevant to matters of beauty) and lisp-1 vs lisp-2 (no clear winner in terms of beauty I would say).


In many ways it is the parsimony that is beautiful. There is a ton of syntax in lisp, but there are not as many punctuation requirements. And you get to play with the grammar and many parts of syntax yourself.

Contrast this with most other languages, where there are many things the language does that you can not do. At least, not easily.


From my short study, I would guess the uniformity and simplicity of its syntax, and the ability to extend it (thanks in part to that same uniformity and simplicity).


Lisp is basically writing a program as abstract syntax tree, with facilities to modify the abstract syntax tree.
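A tiny sketch of what that means in practice (names are only for illustration):

  (defparameter *form* (list '+ 1 2))  ; a fragment of code held as an ordinary list
  (setf (first *form*) '*)             ; edit the "tree" in place
  (eval *form*)                        ; => 2, i.e. (* 1 2)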


When I was in high school I had access to three programming tools: Commodore BASIC, 6502 assembly, and Turbo Pascal. I spent years hand coding 6502 assembly because BASIC was clearly for amateurs and I could only get at the Pascal system at school.

At one point I wrote my own Pascal interpreter for the C64. In BASIC, because I wasn't good enough to do it in assembly. It was a little slow.

I desperately wish someone had introduced me to lisp. The entire course of my life would have been different. The most frustrating thing is that I probably could have written a naive lisp interpreter in assembly.


I got a Forth for my C64, and it was the PERFECT language for the machine: fast, compact, structured, extensible. Generally much easier to understand the code than C64 Basic, but you could also easily call into assembly language if you needed a bit more speed.


I’m too young (or studied in the wrong time or place) so my CS teachers failed to impress me to such an extent.

However, I very well remember reading the Compilerbau book by N. Wirth and the awe I felt. It’s just a few pages but it was a revelation.


I'm young too (21); I was taught programming in Scheme. At UC Berkeley the very first CS class (called "SICP" lol) is in Python and Scheme. https://cs61a.org/


Can you share the name of the book? Is it "Compiler Construction" by Niklaus Wirth?


Check also his other books: Algorithms and Data Structures, Programming in Oberon, Project Oberon, etc.

They are all available on the ETH website and his homepage (in various revisions). Basically everything you need to know about imperative programming on CS bachelor level for free.



Yes - "Compilerbau" from German is literally "building compilers".


Any computer scientist worth his salt has read SICP. I had similar reactions as the author of the post to the expressiveness of Lisp, and it set me on a journey towards functional programming that has been so rewarding to my career. I also got a chance to meet some of the people who worked on Lisp machines at MIT, who interacted with Stallman back in the day. The Lisp machines were truly ahead of their time, and Lisp is still as beautiful in its homoiconic glory today as it was back in the 60s.

I now focus primarily on Rust, which has given me a lot of the same feelings as when I first encountered Lisp. It's truly a language ahead of its time, and I hope it will continue to grow beyond what even Lisp was able to accomplish.


For anyone who wants to feel what this might have felt like (and you've yet to watch them), MIT posted videos of Abelson and Sussman presenting the course on YouTube [0]. I felt a similar sense of magic the first time I saw the derivative section.

[0] https://www.youtube.com/watch?v=2Op3QLzMgSY


Differentiation/Integration implementation in Clojure:

https://gist.github.com/divs1210/4ca74577711eb996a89a36d86a3...


Scheme was my favorite language in school back in 2006, but the only class I ever used it in was a class where we learned search algorithms and the like (it was either AI or Robotics). I was dismayed to find out there isn't a lot of software written in LISP dialects, because I would love to write LISP code all day, it's so much fun.


> Hal went on to explain how the substitution model of evaluation worked in this example and I carefully watched. It seemed simple. It couldn't be that simple, though, could it? Something must be missing. I had to find out.

Well, not quite that simple: the author hasn't taken the potential problem of free variable capture into account (naively substituting an expression that mentions x into a lambda that binds its own x changes its meaning).


I was working on a webpage for translating the SICP lectures; it's something worth sharing: https://learningsicp.github.io


I wish so much that I had known about lisp in college. I only knew Java, C++, Javascript, and Python.


I feel like I am missing something here, but are the things mentioned in the article (tail-call optimization, first-class functions) unique to LISP?

They seem available in other functional languages.

I was under the impression LISP was unique due to its macros.


Where do you think those languages got those features from? This story appears to take place in the 80s, incidentally.


Lisp: gnosticism for programmers.


My question is: yes, lisp... but what is beyond lisp?


Behind lisp is its creator, behind which is pure truth/mathematics, behind which is the divine creator, who is the ultimate base case: "I am that I am".


more lisp :)

Joking aside, I think lisp is nearly fundamental in the same way that C is.


LISP is more akin to a class of languages than a single language in isolation, so it is kind of tricky to ask what's "beyond" LISP.


If we can solidly say what lisp is, then I'd assume there's something beyond the horizon... if we can't say what lisp is, then what's the point? Is it just anything?

I mean, if I have a lisp with static types vs. one without static types, why is the lisp part important at all? Don't you just have two different languages, one with static types and one without?

Seems very unimportant? I dunno, could someone enlighten me.


Maybe higher-kinded types? Monads, monoids, functors, etc.


So the author was excited about the concept of function composition? I'm a fan of lisp but I'm not seeing what's particularly novel about lisp here.


Context is also helpful. "I'm not sure what's novel about function composition" is a little bit of a smug take on a language and concept that was pretty cutting-edge back in the early 80s. Now we Meat-and-Potatoes Programmers know that composition is useful, but I'd be hard-pressed to find first-class composition in 1980 in any popular language that wasn't {academic, Lisp}.


Was it, though? If I remember correctly, Algol 68 had facilities sufficient to do everything the author describes, and just as neatly.


Well, it was 1983. So seeing something like this in a programming language was more novel than it is now.

Also, up to that point the author has only programmed in a couple of different assembly languages, plus Basic. Coming at it from that perspective, there's plenty of novelty to be found in Lisp family languages.


It was novel to you when you first saw it, which is what he's describing.


Not function composition, but higher-order functions.


All this could be done easily in C (function pointers, factorial for loop etc.). And sampling a function twice at 0.0001 dx's apart and calling it "doing calculus" is a stretch.

The author seems to be making the case that, contrary to his original skepticism, Lisp is indeed some magical higher-order language, but I honestly don't see it. Lisp's ratio of philosophizing to noteworthy projects seems to tend toward infinity.


It can all be done on a Turing machine. It is a question of tools, and the cultures that build and use those tools, and the interactions between those two things.

Like others writing here and like the author of the post I, too, had my imagination fired by SICP and the culture it fits in. When I pick up a Learn to Program book and the examples are managing a used car dealer's inventory, I can only say "Thanks" to Abelson and Sussman. They turned me on to something exciting.


How could you create the equivalent semantics without closures and higher-order functions in C? For example, if you wanted to take the second derivative of f

(deriv (deriv f))

or if you wanted to take the nth derivative

  (define (nth-derivative f n)
    (if (= n 0)
        f
        (deriv (nth-derivative f (- n 1)))))
You could do it with some data structures, which you'd then have to interpret, but not as nicely or easily.


    #include <stdio.h>

    #define DX 0.0001
    typedef double (*func)(double);

    /* n-th numerical (forward-difference) derivative of f at x, for n >= 1 */
    double NthDeriv(int n, func f, double x) {
        if (n == 1) {
            return (f(x + DX) - f(x)) / DX;
        }
        else {
            return (NthDeriv(n - 1, f, x + DX) - NthDeriv(n - 1, f, x)) / DX;
        }
    }

    double Cube(double x) { return x * x * x; }

    int main(void) {
        double result = NthDeriv(3, &Cube, 5.0);  /* exact answer is 6 */
        printf("%f\n", result);
        return 0;
    }


Touché. It's not as general as the Lisp example, though.


Modern C compilers will turn the first example iterative too.

But in 1983 this would have seemed like magic.


Maybe worth noting: when I tested a few years ago at least gcc (don't remember whether I tested clang) only applied TCO to small subprograms. As soon as the iterations were multiply nested and/or across multiple top-level functions (even if all of them in the same compilation unit) it would leave parts of the code still eating stack.


'Peaker pointed out that the GCC optimization involved in the factorial example is actually quite brittle: https://news.ycombinator.com/item?id=15698924


But the optimization there is rewriting the algorithm (which is 'brittle' because it depends on the operation used being commutative in that example).

Just optimizing tail calls as written in the programmer's code (which is what is normally meant by "TCO") doesn't need such rewriting, and would be trivial to implement as a jump if the back end supports that. But C also needs to take structures on the stack, and perhaps other details, into consideration, hence apparently gcc's handling isn't straightforward even for plain TCO and fails to be applied in all applicable cases (or at least that's what happened ~4 years ago).
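To make the distinction concrete, here is the usual factorial pair (sketched in Common Lisp rather than C):

  ;; NOT a tail call: the * still has work to do after the recursive call
  ;; returns, so making this iterative means rewriting the algorithm.
  (defun fact (n)
    (if (zerop n) 1 (* n (fact (- n 1)))))

  ;; A genuine tail call: nothing is left to do after the recursive call,
  ;; so plain TCO can turn it into a jump without changing the algorithm.
  (defun fact-acc (n &optional (acc 1))
    (if (zerop n) acc (fact-acc (- n 1) (* n acc))))

  (fact-acc 5)  ; => 120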




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: