The article uses a simplified version of Scheme as a toy Lisp for an exercise, which is an excellent choice, but a downside is that it could be misread as characterizing Scheme well.
First, there's what I think is an important point about idiomatic programming in Scheme, for which the first example in the article gives the wrong idea, with a noble lie: the Java code implies an immediate return from the surrounding method, but the Scheme code does not.
Were that Scheme expression not in a tail position of a procedure (which it probably wouldn't be), it wouldn't have an effect analogous to that of the Java code. And a one-armed `if` makes little sense in a tail position in modern Scheme, and looks erroneous.
Scheme is a full imperative algorithmic language, and doesn't have to be used in a pure-functional way, but idiomatic Scheme tends to lean functional when practical. You almost never do premature returns quite like that in Scheme, for example. (One can do most any memory-safe thing one wants, including the less idiomatic and even the outrageously insane, but hopefully over time one learns the idiomatic ways, rather than overusing or misusing less-idiomatic features, like leaning on a crutch one doesn't need and that actually does harm.)
Separate from that, another thing possibly misleading about this article is that their Scheme implementation doesn't do what's sometimes called "proper implementation of tail calls" (which would be required to be a real Scheme). Idiomatic Scheme will tend to use tail calls heavily, with impunity, which would not be the case if a Scheme implementation let such calls exhaust resources.
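For anyone curious what "proper implementation of tail calls" can look like in an interpreter like the article's, here is a minimal sketch in Python (my own code and names, e.g. `Procedure` and `Env`, not the article's): instead of recursing on an expression in tail position, the evaluator rebinds the expression and environment and loops, so deeply tail-recursive Scheme code never grows the Python call stack.

```python
# Minimal sketch (not the article's code): an eval loop with proper tail calls.
# For expressions in tail position, rebind `x` and `env` and go around the
# while loop again instead of making a recursive Python call.

class Env(dict):
    """A dict of bindings with a link to an enclosing environment."""
    def __init__(self, params=(), args=(), outer=None):
        super().__init__(zip(params, args))
        self.outer = outer
    def __missing__(self, name):
        return self.outer[name]

class Procedure:
    """A user-defined procedure: parameter names, body form, closure env."""
    def __init__(self, params, body, env):
        self.params, self.body, self.env = params, body, env

def evaluate(x, env):
    while True:
        if isinstance(x, str):                      # symbol -> variable lookup
            return env[x]
        if not isinstance(x, list):                 # literal -> itself
            return x
        op, *args = x
        if op == 'quote':
            return args[0]
        if op == 'lambda':                          # (lambda (params) body)
            params, body = args
            return Procedure(params, body, env)
        if op == 'if':                              # branches are tail positions
            test, conseq, alt = args
            x = conseq if evaluate(test, env) else alt
            continue
        proc = evaluate(op, env)                    # procedure call
        vals = [evaluate(a, env) for a in args]
        if isinstance(proc, Procedure):             # tail call: become the body
            x, env = proc.body, Env(proc.params, vals, proc.env)
        else:
            return proc(*vals)                      # builtin

# A tail-recursive countdown that would overflow a naive recursive evaluator:
g = Env()
g.update({'zero?': lambda n: n == 0, '-': lambda a, b: a - b})
g['loop'] = evaluate(['lambda', ['n'],
                      ['if', ['zero?', 'n'], ['quote', 'done'],
                             ['loop', ['-', 'n', 1]]]], g)
print(evaluate(['loop', 100000], g))                # 'done', no RecursionError
```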
The other thing that "beginner" tutorials such as this one do is say "lisp" when they really mean "something resembling Scheme".
I started a project some time ago to write an elisp interpreter, mostly for fun. But since elisp is much more like Common Lisp than Scheme, it has a surprising amount of additional syntax compared to a Scheme. The parser was much more complex than I initially thought it would be.
Not to take away from the side-splitting hilariousness of this novel meme, but I just noticed Lisp is actually one of the few languages where you can't add spurious parentheses. If you find that `a`, `(a)`, `((a))`, etc. are equivalent, it's probably not Lisp you're looking at.
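To make that concrete, here's a minimal sketch of a toy Lisp-1 style evaluator in Python (my own toy code, not the article's): every extra layer of parentheses is one more function call, so `a`, `(a)`, and `((a))` mean three different things.

```python
# Toy Lisp-1 evaluation rule: a bare symbol is a variable lookup, and
# every surrounding pair of parentheses is one more function call.
def evaluate(expr, env):
    if isinstance(expr, str):                        # a     -> look up a
        return env[expr]
    fn = evaluate(expr[0], env)                      # (a)   -> call a's value
    args = [evaluate(arg, env) for arg in expr[1:]]
    return fn(*args)

env = {'a': lambda: (lambda: 42)}                    # a returns another function
print(evaluate('a', env))                            # a     -> <function ...>
print(evaluate(['a'], env))                          # (a)   -> <function ...>
print(evaluate([['a']], env))                        # ((a)) -> 42
```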
In Scheme or Clojure, but not in Common Lisp. Now, should we adopt this parenthesis-exactness as part of the definition of what a real Lisp is?
P.S. Of course, a Lisp-2 only saves us from an unlimited proliferation of parentheses, but a simple `(defmacro foo () 'foo)` will let you add a couple of extra ones whenever really needed.
This is not better, at least by the traditional criteria Lisp is judged by. The scoping rules inherit Python's confusing and conflated notions of binding vs. mutation.
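A minimal sketch of the conflation being referred to (my own example, not the linked code): in Python, plain assignment inside a function introduces a new local binding, so an attempt to mutate an enclosing variable silently rebinds instead, unless you opt out with `nonlocal` or `global`.

```python
def make_counter():
    count = 0
    def bump():
        count = count + 1   # assignment makes `count` local to bump,
        return count        # so calling this raises UnboundLocalError
    return bump

def make_counter_explicit():
    count = 0
    def bump():
        nonlocal count      # explicitly mutate the enclosing binding
        count += 1
        return count
    return bump

counter = make_counter_explicit()
print(counter(), counter())  # 1 2
# make_counter()() raises UnboundLocalError: binding and mutation look alike
```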
I've noticed you do these quite often, and I thoroughly approve. I just wonder, do you do all of them by hand, or have you automated the process by now?
I too was wondering this. Is this your full-time job? Are you a bot? :-) Just kidding. I too find this useful - found some interesting discussions on the Constructing a Compiler thread from yesterday. Keep up the good work!
This is such a great post, and if you are new to Lisp or Compilers or PLT I would recommend investing time in it.
> I think it meets Alan Kay's 1972 claim that you could define the "most powerful language in the world" in "a page of code." (However, I think Alan would disagree, because he would count the Python compiler as part of the code, putting me well over a page.)
If anyone is curious, I have some ideas on how you could more precisely define this problem and an objective mathematical way of evaluating it. Shoot me an email if curious.
I wrote my own Lisp interpreter in Python and it was a very instructive experience, especially since you can get a simple working prototype in one day of hacking around.
Then I got stuck at recursively evaluating macros.
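If it helps anyone else who hits the same wall, one way to handle this (a sketch of one possible approach, not the parent's code) is to treat macros as ordinary functions from source forms to source forms, kept in their own table, and expand each form to a fixed point before handing it to the evaluator:

```python
# Macros map source forms to source forms; expansion happens before evaluation.
MACROS = {}

def macroexpand(form):
    # Expand the head repeatedly while it names a macro, then recurse into
    # subforms. (A real expander would skip quote and other special forms.)
    while isinstance(form, list) and form and form[0] in MACROS:
        form = MACROS[form[0]](*form[1:])
    if isinstance(form, list):
        return [macroexpand(f) for f in form]
    return form

# (when test body) => (if test body #f), with forms written as Python lists
MACROS['when'] = lambda test, body: ['if', test, body, False]

print(macroexpand(['when', ['>', 'x', 0],
                           ['when', ['<', 'x', 10], 'x']]))
# ['if', ['>', 'x', 0], ['if', ['<', 'x', 10], 'x', False], False]
```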