As somebody who has read a couple of the author's books, and also somebody who spent almost a decade studying compilers, I am genuinely curious about the author himself.
These works are something I both understand and would never achieve myself. These are cultural artifacts, like deeply personal poetry, made purely for the process of it. Not practically useful, not state of the art, not research level, but... a personal journey?
If the author is reading this... can you share your vision? Motivation?
Thank you so much for reading my books and describing my work in such
beautiful words! You basically answered your own question! My motivation
is just the creation of something I find beautiful. The vision, to pass
knowledge to those who seek it in the simplest possible way, where
"simple" does not necessarily mean in the tersest form, but in a form
that invites being digested.
I do not usually talk much about "myself". I tried, but with no-one
asking, I find it difficult to say anything.
I just bought ePubs for your Raja Yoga Revisited (I usually study SRF material, but alternatives are good!) and Scheme 9 from Empty Space. Your web site is very nice; I loved the ‘Who am I?’ page. I have been using Lisp languages since 1978, but except for studying Peter Norvig’s Lisp in Python, I have never dropped below the abstraction layer into a Lisp implementation, so I am looking forward to that.
+1, long-time follower of nmh's work. His books are brief and concise, but carry a peculiar "something", a precision of expression, etc., that is hard to put into words - but can often be noticed in long-time practitioners of some mental teaching. :)
It is always interesting to spot a person on the interwebs who seems to actually have managed to turn Buddhist or some other teachings into real-world deeds. Living really modestly (IIRC, he/you also use modest, underclocked laptops?), publishing for the benefit of many, and doing all this for years and years. Like, there seems to be no "overhead" in this way of living. Hugely inspirational.
I would also point out the "Essays" section on nmh's webpage, especially the ones discussing sensitivity and high IQ: https://t3x.org/#essays
Having purchased several of your books, thanks for your work, nmh!
Turning the Buddhist (or other) teachings into deeds
is not too hard once you have understood who you are,
and, maybe more importantly, who you are not. Figuring
/that/ out can be tough and require a lot of practice.
What people perceive as modest is really an acceptance
or even appreciation of what is. My apartment has not
been renovated in decades, I repair what needs repair and
otherwise leave things to themselves. I wear clothes until they
disintegrate, and my hardware is already old when I
buy it. This is the course of things. Things age and
change and at some point disappear. Why prefer the new
over the old? Why the old over the new? It is just that
things and beings get old on their own, and it is much
more joyful to witness this than trying to resist it.
Many thanks to everybody who wrote in this thread! Your words mean a lot to me! I will reply to some individual messages. If I don't, please substitute "thank you!" :)
> These are cultural artifacts, like deeply personal poetry, made purely for the process of it. Not practically useful, not state of the art, not research level, but... a personal journey?
I can't speak for the author but this is exactly how I look at the lisp I'm developing. It's a lifetime project. I had some kind of vision depicting how different things could be, and at some point I started trying to make it happen. I want to convince myself I'm not insane for thinking it was possible in the first place.
I love it so much, and seeing your bibliography makes me feel like a kid in a candy store. The confluence of Asian philosophy and computing is delightful.
Where is the bibliography? (I searched for it, but couldn't find it, expecting to find a list of books which the author referenced in writing/researching)
Purchased the author's `Scheme 9 from Empty Space` book and loved it. Lots of very well-commented and explained code, on how to build a language up from the beginning. So much fun.
Looks awesome. Just ordered a copy. I'm just now picking up Peter Seibel's Practical Common Lisp again and taking another stab at immersing myself in the world of Lisp. So this is perhaps fortuitous timing.
I love Lisp (I'm an Emacs user and often write in Racket for personal projects) but the one thing I never understood about the Lisp community is the emphasis placed on metacircular evaluators.
I sure find them beautiful and all, but why do they take center stage so often? Besides the aesthetics and instructional value, I don't get the appeal. Also, I feel that a bunch of the heavy lifting behind metacircular evaluators is actually done by the Polish notation syntax as well as the underlying implementation, and these concepts don't get nearly as much love.
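To make it concrete, the kind of thing I mean fits on a page; here's a toy sketch in Common Lisp (just quote, if, lambda, and application, with the environment as an alist):

  ;; Toy metacircular-style EVAL: QUOTE, IF, LAMBDA, application only;
  ;; ENV is an alist of (symbol . value) pairs.
  (defun m-eval (x env)
    (cond ((symbolp x) (cdr (assoc x env)))
          ((atom x) x)                         ; numbers etc. self-evaluate
          ((eq (car x) 'quote) (cadr x))
          ((eq (car x) 'if)
           (if (m-eval (cadr x) env)
               (m-eval (caddr x) env)
               (m-eval (cadddr x) env)))
          ((eq (car x) 'lambda)                ; closure captures ENV
           (list 'closure (cadr x) (caddr x) env))
          (t (m-apply (m-eval (car x) env)
                      (mapcar (lambda (a) (m-eval a env)) (cdr x))))))

  (defun m-apply (f args)
    (destructuring-bind (tag params body env) f
      (declare (ignore tag))
      (m-eval body (append (mapcar #'cons params args) env))))

  ;; (m-eval '((lambda (x) (if x 'yes 'no)) 1) '())  => YES

And notice how much of the work above is already done for it by the reader and by the host's data structures, which is kind of my point.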
But of course you must close the loop by representing Maxwell's equations electromagnetically.
I know this is a classic analogy, but now you've got me wondering: originally Maxwell wrote a messy pile of scalar equations; later someone (Gibbs?) gave them the familiar vector calculus form. Nowadays we have a marvellously general and terse form, like (using the differential of the Hodge dual, in naturalised units):
d⋆F = J
My question is, when are we going to get some super-compact unified representation of `eval`?
It is a bit similar. The first versions in Lisp 1.0 and the Lisp 1.5 paper were working, but not fully refined. SICP and others later presented it a bit more refined in my opinion.
There's also a version of the metacircular interpreter written fully in M-exprs, but it kinda breaks the spirit of things.
I think the version of eval that we have is already pretty terse for what it is. You could maybe code-golf it into something smaller, or you could code-golf it into something fully immutable.
My only gripe is that they all rely on an already existing reader that parses the expressions for you and represents them, which is exactly what the book is about.
Finding a small enough interpretation that does ALL of it would be a dream, but I doubt it could be anywhere near as concise as the (modern) Maxwell equations.
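To be fair, a toy reader isn't all that much code either; here's a sketch in Common Lisp (ignoring strings, quote sugar, and dotted pairs):

  ;; Toy S-expression reader: handles symbols, integers, and lists only.
  (defun toy-read (stream)
    (let ((c (peek-char t stream)))            ; skip whitespace, peek
      (cond ((char= c #\()
             (read-char stream)
             (toy-read-list stream))
            (t (toy-read-atom stream)))))

  (defun toy-read-list (stream)
    (if (char= (peek-char t stream) #\))
        (progn (read-char stream) nil)
        (cons (toy-read stream) (toy-read-list stream))))

  (defun toy-read-atom (stream)
    (let ((chars '()))
      (loop for c = (peek-char nil stream nil #\Space)
            until (member c '(#\Space #\Tab #\Newline #\( #\)))
            do (push (read-char stream) chars))
      (let ((s (coerce (nreverse chars) 'string)))
        (or (parse-integer s :junk-allowed t)  ; number, else symbol
            (intern (string-upcase s))))))

  ;; (with-input-from-string (in "(car (quote (a b)))")
  ;;   (toy-read in))  => (CAR (QUOTE (A B)))

But you're right that it's rarely shown next to eval, and it is where a lot of the "Lisp feel" actually comes from.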
It's hard to get a super compact eval for LISP with its many primitives.
It's somewhat easier for the lambda calculus, which inspired LISP, with only the three primitives of variable, abstraction, and application. In the binary lambda calculus this allows for a self-interpreter
that tokenizes and parses a closed lambda term from a raw binary input stream and passes the term and the remainder stream to a given continuation [1].
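For flavor, here is my own toy sketch (nothing like the BLC self-interpreter in [1]): a leftmost-outermost reducer for the three constructs, in Common Lisp, with terms as nested lists:

  ;; Terms are symbols (variables), (lambda v body), or (f a).
  ;; Substitution below is NOT capture-avoiding, so variable names
  ;; are assumed to be distinct.
  (defun subst-term (term var val)
    (cond ((eq term var) val)
          ((atom term) term)
          ((eq (car term) 'lambda)
           (if (eq (cadr term) var)
               term                            ; VAR is shadowed here
               (list 'lambda (cadr term)
                     (subst-term (caddr term) var val))))
          (t (list (subst-term (car term) var val)
                   (subst-term (cadr term) var val)))))

  (defun reduce-term (term)
    (if (or (atom term) (eq (car term) 'lambda))
        term                                   ; stop at lambdas
        (let ((f (reduce-term (car term))))    ; reduce operator first
          (if (and (consp f) (eq (car f) 'lambda))
              (reduce-term (subst-term (caddr f) (cadr f) (cadr term)))
              (list f (reduce-term (cadr term)))))))

  ;; (reduce-term '(((lambda x (lambda y x)) a) b))  => A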
>later someone (Gibbs?) gave them the familiar vector calculus form.
It was Oliver Heaviside (https://en.wikipedia.org/wiki/Oliver_Heaviside) that rewrote Maxwell's original equations (20 of them in differential form) into the notation used today (4 of them in vector calculus form).
I clicked on this and immediately wanted to buy it. But then someone in the comments said to also look at your other books and well damn, now I want to read all of them and I can't choose which to start with.
That's super helpful! I downloaded the samples for S9fES and LfN, checking those out first. I love how generous you are with the number of pages in the free samples by the way.
Thanks. I recently had to reinvent LISP to script my CRDT database.
That was not much work, because I already had the notation (I use RDX, a JSON superset with CRDT types).
Still, I stumbled at the idiosyncratic LISP bracketing. Luckily, RDX allows for different tuple notations. So, I styled it to look less alien to a curly-braced developer. Like this https://github.com/gritzko/go-rdx/blob/main/test/13-getput.j...
For example, print change-dir make-dir; is equivalent to (print (change-dir (make-dir))) in the old money. I wonder if I am reinventing too much here.
Did LISPers try to get rid of the brackets in the past?
There have been many attempts to get rid of sexprs in favor of a “better” syntax. Even John McCarthy, the inventor (discoverer?) of Lisp, had plans for an “M-expression” syntax to replace “S-expressions.” It never happened. The secret is that Lispers actually view sexprs as an advantage, not something to be worked around. Once you discover symbolic editing and code manipulation based on sexprs, you’ll never go back to weak line editing. That said, some Lisp dialects (e.g. Clojure and Racket) have embraced other symbols like square and curly brackets to keep the code more terse overall and optically break up longer runs of parentheses.
Probably the best example of a “Lisp without parentheses” is Dylan. Originally, Dylan was developed as a more traditional Lisp with sexprs, but they came up with a non-sexpr “surface syntax” before launching it to avoid scaring the public.
Exactly. I also like Clojure’s use of square brackets for vectors and curly braces for maps. It eliminates all the “vector-” and “map-” function calls.
Those are big quality of life improvements. I wish the other lisps would follow suit. I suppose I could just implement them myself with some macros, but having it standard would be sweet.
The Revised Revised Revised Revised Revised Revised Report on the Algorithmic Language Scheme (R6RS) specified that square brackets should be completely interchangeable with round brackets, which allows you to write let bindings or cond clauses like so:
  (let ([a (get-some-foo 1)]
        [b (get-some-foo 2)])
    (cond [(> a b) -1]
          [(< a b) 1]
          [else 0]))
...but I hate that, I'd much prefer if square brackets were only used for vectors, which is why I have reader macros for square brackets -> vectors and curly brackets -> hash tables in my SBCL run commands.
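The gist is something like this (a from-memory sketch, not my literal init file; note that what's inside the brackets is read as literal data, not evaluated):

  ;; [ ... ] reads as a vector, { k v ... } as a hash table.  The
  ;; closing chars must be terminating macro characters so that
  ;; READ-DELIMITED-LIST stops at them.
  (set-macro-character #\[
    (lambda (stream char)
      (declare (ignore char))
      (coerce (read-delimited-list #\] stream t) 'vector)))
  (set-macro-character #\] (get-macro-character #\)))

  (set-macro-character #\{
    (lambda (stream char)
      (declare (ignore char))
      (let ((forms (read-delimited-list #\} stream t))
            (table (make-hash-table :test #'equal)))
        (loop for (k v) on forms by #'cddr    ; alternating key/value
              do (setf (gethash k table) v))
        table)))
  (set-macro-character #\} (get-macro-character #\)))

  ;; [1 2 3]      => #(1 2 3)
  ;; {:a 1 :b 2}  => a hash table mapping :A -> 1, :B -> 2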
I think the R6RS behavior helps with visual matching, but squanders the chance to use square brackets for something more useful (e.g. vectors), which is a shame. Another thing Clojure does is copy Arc in eliminating parentheses around the pairs of forms in let bindings and cond forms, which really aren’t needed. It just expects pairs of forms, and the compiler objects if given an odd number. The programmer can use whitespace (notably newlines) to format the code so the pairings are visibly apparent. That reduces a surprising amount of needless parentheses, because let binding forms are used all over (less so cond forms).
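You can get the same surface in Common Lisp with a few lines of macro; a sketch (FLAT-LET is a made-up name, and like Clojure's let it binds sequentially, i.e. LET* semantics):

  ;; Arc/Clojure-style LET with flat NAME VALUE pairs instead of
  ;; parenthesized bindings; objects to an odd number of forms.
  (defmacro flat-let (bindings &body body)
    (when (oddp (length bindings))
      (error "FLAT-LET expects an even number of binding forms."))
    (if (null bindings)
        `(progn ,@body)
        `(let ((,(first bindings) ,(second bindings)))
           (flat-let ,(cddr bindings) ,@body))))

  ;; (flat-let (a 1
  ;;            b (+ a 1))
  ;;   (+ a b))                    ; => 3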
True, with modern machine-generated mass operations, refactoring is easier than with older tools, but that doesn't mean a given set of brackets is 'useless'.
I wouldn’t go as far as “pretty awful,” but yes, it’s a keystroke more to manipulate two sequential forms instead of one. And yes, there is a slight indentation advantage when the test and the conditional code won’t fit on the same line. It’s easy enough to use “do” when the conditional clause has multiple forms, however. Personally, I’ll take those trade-offs for the reduction in clutter.
I think it's a matter of whether you're programming in a mostly applicative way† or in a more imperative way. Especially in the modern age of generational GC, Lisp cons lists support applicative programming with efficient applicative update, but sacrifice efficiency for certain common operations: indexing to a numerical position in a large list, appending to a list, or doing a lookup in a finite map such as an alist. So, in Common Lisp or Scheme, we are often induced to use vectors or hash tables, sacrificing applicative purity for efficiency—thus Perlis's quip about how purely applicative languages are poorly applicable, from https://www.cs.yale.edu/homes/perlis-alan/quotes.html.
In general a sequence of expressions of which only the value of the last is used, like C's comma operator or the "implicit progn" of conventional cond and let bodies, is only useful for imperative programming where the non-last expressions are executed for their side effects.
Clojure's HAMTs can support a wider range of operations efficiently, so Clojure code, in my limited experience, tends to be more purely applicative than code in most other Lisps.
Incidentally, a purely applicative finite map data structure I recently learned about (in December 02023) is the "hash trie" of Chris Wellons and NRK: https://nullprogram.com/blog/2023/09/30/. It is definitely less efficient than a hash table, but, in my tests so far, it's still about 100ns per hash lookup on my MicroPC and 250ns on my cellphone, compared to maybe 50ns or 100ns respectively for an imperative hash table without FP-persistence. It uses about twice as much space. This should make it a usable replacement for hash tables in many applications where either FP-persistence, probabilistically bounded insertion time, or lock-free concurrent access is required.
This "hash trie" is unrelated to Knuth's 01986 "hash trie" https://www.cs.tufts.edu/~nr/cs257/archive/don-knuth/pearls-..., and I think it's a greatly simplified HAMT, but I don't yet understand HAMTs well enough to be sure. Unlike HAMTs, it can also support in-place mutating access (and in fact my performance measurements above were using it).
______
† sometimes called "functional", though that can alternatively refer to programming with higher-order functions
I sometimes wonder if the issue is really the parentheses or the ease of nesting. In LISP it’s natural to write
(f (g (h x))).
Whereas most people are used to:
a = h(x);
b = g(a);
c = f(b);
In C/C++ most functions return error codes, forcing the latter form.
And then there are functional languages allowing:
x -> h -> g -> f
but I think the implicit parameter passing doesn’t sit well with a lot of programmers either.
Interesting comment. I found the lisp/sexpr form instantly understandable. While the others weren't hard to grasp, it took a moment to consciously parse them before their meaning was as clear. Perhaps the functional arrow notation is least appreciated because it seems more abstract, or maybe the arrows are just confusing.
More likely than not it's a matter of what a person gets used to. I've enjoyed working in Lisp/Scheme and C, but not so much in primarily functional languages. No doubt programmers have varied histories that explain their preferences.
As you imply, in C one could write nested functions as f (g (h (x))) if examining return values is unnecessary. OTOH in Lisp return values are also often needed, prompting use of (let ...) forms, etc., which can make function nesting unclear. In reality programming languages are all guilty of potential obscurity. We just develop a taste for what flavor of obscurity we prefer to work with.
Has anyone here read his “Practical Compiler Construction”? It’s one of the shorter compiler books I’ve seen; seems like it might be a good way to learn a bit more about assembly.
I was very curious about this too. I've had my finger hovering over the "buy" button for months but there are next to no reviews on it. I'm wondering how it differs from other, similar works
There are always the sample chapters, and the code from the book is in the public domain. :)
The book is basically a modern and more complete version of the "Small C Handbook" of the 1980s. It goes through all the stages of compilation, including simple optimizations, but keeps complexity to a minimum. So if you just want to learn about compiler writing and see what a complete C compiler looks like under the hood, without investing too much into theory, then this is probably one of very few books that will deliver.
Edit: and then Warren Toomey has written "A Compiler Writing Journey" based on PCC, which may shed a bit more light on the book: https://github.com/DoctorWkt/acwj
Under “The Intended Audience” (page 10 of the PDF sample on the site), it says that this is not an introduction to LISP and that it would be more enjoyable with some prerequisites.
Where does one — who has no knowledge of these prerequisites or about LISP (except that the latter has been heard in programming circles as something esoteric, extremely powerful, etc.) — start, before reading this book?
There's ANSI Common Lisp by Paul Graham. I've never read it and I'm not sure it's the best introduction but thumbing through it I don't see how you can get any more basic than that.
When I was a beginner, A Gentle Introduction to Symbolic Computation worked for me. As the title suggests, it gently introduces concepts in a very beginner friendly manner, so even macros are easy enough to grasp by the time you get there. The diagrams and examples are great.
If you prefer hands-on learning, How to Design Programs is a pretty good resource for the foundations, with lots of examples and exercises: https://htdp.org
But learning the basics of Lisp is more of a side effect; the focus is on program design.
One source of awe people have with the idea of Lisp is how much you can build off of so little. I like pg's Roots of Lisp paper on that https://justine.lol/sectorlisp/jmc.pdf The core thing was the meta-circular evaluator (eval) in the original Lisp paper. You can work through it or try re-implementing it in something else. I like this recent tiny version https://justine.lol/sectorlisp2/
Another source of awe is about Lisp being more of a programming system than a language, and Common Lisp was the standardization of a lot of efforts towards that by companies making large and industrial pieces of software like operating systems, word processors, and 3D graphics editors. At the language level, "compile", "compile-file", "disassemble", "trace", "break", "step" are all functions or macros available at runtime. When errors happen, if there's not an explicit handler for it (like an exception handler) then the default behavior isn't to crash but to trigger the built-in debugger. And the stack isn't unwound yet, you can inspect the local variables at every layer. (There's very good introspection in general for everything.) Various restarts will be offered at different parts of the stack -- for example, a value was unknown, so enter it now and continue. Or you can recompile your erroneous function and restart execution at one of the stack frames with the original arguments to try again. Or you can apt-get install some foreign dependency and try reloading it without having to redo any of the effort the program had already made along the way.
Again, all part of the language at runtime, not a suite of separate tools. Implementations may offer things beyond this too, like SBCL's code coverage or profiling features. All the features of the language are designed with this interactivity and redefinability in mind though -- if you redefine a class definition, existing objects will be updated, but you can control that more finely if you need to by first making a new update-instance-for-redefined-class method. (Methods aren't owned by classes, unlike other OOP languages, which I think eliminates a lot of the OOP design problems associated with those other languages.)
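A tiny taste of what that looks like (LOOKUP here is a made-up example, but USE-VALUE, RESTART-CASE, and HANDLER-BIND are all standard):

  ;; The low-level code offers a restart; a handler established layers
  ;; away picks it BEFORE the stack unwinds, and execution continues.
  (defun lookup (key table)
    (or (gethash key table)
        (restart-case (error "No value for ~S." key)
          (use-value (v)
            :report "Supply a value to use instead."
            v))))

  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'use-value :default))))
    (lookup :missing (make-hash-table)))       ; => :DEFAULT

  ;; With no handler, the debugger pops up and offers the same restart
  ;; interactively.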
I like the book Successful Lisp as a tour of Common Lisp, it's got a suggested reading order in ch 2 for different skill levels: https://dept-info.labri.fr/~strandh/Teaching/MTP/Common/Davi... It's dated in parts as far as tooling goes but if you're mostly interested in reading about some bits rather than actively getting into programming with Lisp that's not so bad. If you do want to get into it, https://lispcookbook.github.io/cl-cookbook/ has some resources on getting started with a Lisp implementation and text editor (doesn't have to be emacs).
Can anyone compare this with Queinnec's Lisp in Small Pieces? I was waiting for an English version of the 2nd edition but I guess it's never happening and my French has unfortunately regressed since then.
LISP in Small Pieces discusses very sophisticated techniques, while LISP From Nothing is more about the quirks and implementations of early LISP. Of course you can write a modern LISP based on the things covered in LFN, but if you are planning to write more than a toy, then Queinnec's book is the one to read.
tug2024 wrote:
> Doesn’t lisp extend lambda calculus (abstraction . application)? As a consequence, lisp (abstraction . application . environment)!
Another valid question downvoted into oblivion.
The environment in (lexically scoped) LISP is an implementation detail. Lambda calculus does not need an environment, because variables are substituted on a sheet of paper. So lambda calculus equals lexically scoped LAMBDA in LISP.
Sure, you could view LISP as LC plus some extra functions (that are not easily implemented in LC).