Thank you for this. There is occasionally disagreement about what "Common Lisp" even means, and the spec is often cited, but as far as all of my posts, library work, and application work are concerned, Common Lisp means "the current reality of the major compilers as implemented in 2025". This is a descriptive / bottom-up definition, and as an active author of software it is the one I'm more concerned with. For instance, `:local-nicknames` have been essentially universally implemented among the compilers, despite not being part of the spec. To me, this makes that feature "part of Common Lisp", especially since basically all CL software written today assumes its availability.
You're right to point out too that the post is somewhat SBCL-centric - this too reflects a descriptive reality that most new CL software is written with SBCL in mind first. Despite that I'd always encourage library authors to write as compatible code as possible, since it's really not that hard, and other compilers absolutely have value (I use several).
Every programming language has a practical definition: it is the intersection of the sets of features that are accepted by the various relevant production compilers and interpreted identically enough to be portable to all of them.
Formal language definitions, standards, and books are great, but you can't compile with them. Abstract language specs that don't have reference implementations or conformance test suites are not particularly useful to either implementors or users.
This was also my impression when reading the article, as someone who uses Sly heavily, every day. I can't imagine not having in-editor access to functionality like recompiling the function at point, or live evaluation of testing forms directly from the buffer. As Stew (the Clojure guy) pointed out in a video from a number of years ago, nobody should be typing anything raw into the in-editor REPL prompt; you should be sending forms directly from the code buffer.
How do I maintain that workflow if I'm to use native REPLs?
Thanks for this! I'll look into compressing the `.exe` down.
With regards to licensing, I think I'm okay. Raylib itself is permissive, and I own the rest of the dependencies (save two - one is MIT and the other is public domain).
I have been using Trial[1] for the past few weeks to test out game development in Common Lisp, and have been having a great time. Being able to alter (almost) all aspects of your game while it's running is a blessing.
Lisp languages seem well-suited for building games. The ability to evaluate code interactively without recompilation is a huge deal for feature building, incremental development, and bug-fixing. Retaining application state between code changes seems like it would be incredibly useful. Common Lisp also appears to be a much faster language than I would have blindly assumed.
The main downside for me (in general, not just for game programming) is the clunkiness in using data structures - maps especially. But the tradeoff seems worth it.
One of the downsides is that implementations like SBCL have a deep integration with the host platform and need things like a well-performing GC implementation; getting this running on specialized game hardware is challenging. The article describes that. Getting over the hurdle of the low-level integration is difficult. The reward comes when one gets to the point where the rapid incremental development cycle of Common Lisp, even with connected devices, kicks in.
For the old historic Naughty Dog use case, it was a development system written in Common Lisp on an SGI, plus a C++ runtime with low-level Scheme code on the PlayStation.
> Common Lisp also appears to be a much faster language than I would have blindly assumed.
There are two modes:
1) fast optimized code which allows for some low-level stuff to stay with Common Lisp
2) unoptimized, but natively compiled code, which enables safe (-> the runtime does not crash) interactive and incremental development -> this is the mode in which much of the software can run nowadays, and it is still "fast enough" for many use cases
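In SBCL these modes map onto the standard OPTIMIZE qualities; a minimal sketch (the exact quality values are a matter of taste):

;; mode 1: fast, optimized code for the low-level parts
(declaim (optimize (speed 3) (safety 1) (debug 1)))

;; mode 2: safe, fully debuggable code for interactive development
(declaim (optimize (speed 0) (safety 3) (debug 3)))

The same declarations can also be made per function with DECLARE.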
Except for occasionally using a small embedded Scheme in C++ when I worked at Angel Studios, I haven’t much experience using Lisp languages for games.
That said I have a question: is it a common pattern when using Lisp languages for games to use a flyweight object reuse pattern? This would minimize the need for GC.
If that's your main downside, that's pretty good, since clunkiness is in many ways fixable. Personally with standard CL I like to use property lists with keywords, so a "map literal" is just (list :a 3 :b 'other). It's fine when the map is small. The getter is just getf, setting is the usual setf around the getter. There's a cute way to loop by #'cddr for a key-and-value loop, though Alexandria (a very common utility library) has some useful utils for looping/removing/converting plists as well.
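A minimal sketch of all that, in standard CL:

;; a small "map" as a property list
(defvar *m* (list :a 3 :b 'other))

(getf *m* :a)            ; getter => 3
(setf (getf *m* :b) 4)   ; setter: SETF wrapped around the getter

;; key-and-value loop, stepping by #'cddr
(loop for (k v) on *m* by #'cddr
      do (format t "~s -> ~s~%" k v))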
If typing out "(list ...)" is annoying, it's a few lines of code to let you type {:a 3 :b 4} instead, like Clojure. And the result of that can be a plist, or a hash table, or again like Clojure one of the handful of immutable map structures available. You can also easily make the native hash tables print themselves out with the curly bracket syntax.
(On the speed front, you might be amused by https://renato.athaydes.com/posts/how-to-write-slow-rust-cod... But separately, when you want to speed up Lisp (with SBCL) beyond the default, it's rather fun to be able to run disassemble on your function and see what it's doing at the assembly level. You can turn up optimization hints and have the compiler start telling you, even at the individual-function level, where it has to use e.g. generic addition instead of a faster assembly instruction because it can't prove type info, so you'll have to tell it or fix your code. It can tell you about dead code it removed. You can declare stack allocation if needed. Simple benchmarking that also includes processor cycles and memory allocated is available immediately with the built-in time macro...)
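A sketch of that workflow (the function here is made up; DISASSEMBLE, the declarations, and TIME are standard):

(defun sum-to (n)
  ;; optimization hints: ask for speed and declare types, so SBCL can
  ;; use fixnum arithmetic instead of generic addition
  (declare (optimize (speed 3)) (type fixnum n))
  (loop for i of-type fixnum from 1 to n
        sum i of-type fixnum))

(disassemble #'sum-to)    ; inspect the generated machine code
(time (sum-to 1000000))   ; reports run time, processor cycles, bytes consed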
Things have costs; what's your underlying point? That one shouldn't create such a macro, even if it's a one-liner, because of unquantified costs or concerns...?
Singling out individual macros for "cost" analysis this way is very weird to me. I disagree entirely. Everything has costs, not just macros, and if you're doing an analysis you need to include the costs of not having the thing (i.e. the benefits of having it). Anyway, whether it's a reader macro, compiler macro, or normal function, lines of code is actually a great proxy measure for all sorts of things, even if it can be an abused measure. Compared to more complex metrics like McCabe's cyclomatic complexity, or Halstead's Software Science metrics (which use redundancy of variable names to try to quantify something like clarity and debuggability), the correlations with simple lines of code are high. (See for instance https://www.oreilly.com/library/view/making-software/9780596... which you can find a full pdf of in the usual places.)

But the correlations aren't 1, and indeed there's an important caveat against making programs too short. A value you didn't mention, which I think can factor into cost, is "power": shorter programs (and languages that enable them) are generally seen as more powerful, at least for that particular area of expression. Shorter programs are one of the benefits of higher-level languages. And besides power, I do think fewer lines of code most often corresponds to superior clarity and debuggability (and of course fewer bugs overall, as other studies will tell you), even if code golfing can take it too far.
I wouldn't put much value in any cost due to a lack of adoption, because as soon as you do that, you've given yourself a nice argument to drop Lisp entirely and switch to Java or another top-5 language. Maybe if you can quantify this cost, I'll give it more thought. It also seems rather unfair in the context of CL, because the way adoption of, say, new language features often happens in other ecosystems is by force, but Lisp has a static standard, so adoption otherwise means adoption of libraries or frameworks where incidentally some macros come along for the ride. e.g. I think easy-routes's defroute is widely adopted among users of hunchentoot, but will never be for CL users in general because it's only relevant for webdev. And fare's favorite macro, nest, is part of uiop and so basically part of every CL out there out of the box -- how's that for availability if not adoption -- but I think its adoption is and will remain rather small, because the problem it solves can be solved in multiple ways (my favorite: just use more functions) and the most egregious cases of attacking the right margin don't come up all that often. Incidentally, it's another case in point on lines of code: the CL implementation is a one-liner and easy to understand (and, like all macros, rather easy to test/verify with macroexpand), but the Scheme implementation is a bit more sophisticated: https://fare.livejournal.com/189741.html
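For passerby readers, UIOP:NEST just nests each form into the tail of the previous one:

(uiop:nest
 (let ((x 1)))
 (let ((y 2)))
 (+ x y))
;; reads as (let ((x 1)) (let ((y 2)) (+ x y)))  => 3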
What's your cost estimate on a simple version of the {} macro shown in https://news.ycombinator.com/item?id=1611453 ? One could write it differently, but it's actually pretty robust to things like duplicate keys or leaving keys out, it's clear, and the use of a helper function aids debuggability (popularized most in call-with-* macro expansions). However, I would not use it as-is with that implementation, because it suffers from the same flaw as Lisp's quote-lists '(1 2 3) and array reader macro #(1 2 3) that keep me from using either of those most of the time as well. (For passerby readers, the flaw is that if you have an element like "(1+ 3)", that unevaluated list itself is the value, rather than the computation it's expressing. It's ugly to quasiquote and unquote what are meant to be data structure literals, so I just use the list/vector functions. That macro can be fixed on this though by changing the "hash `,(read-..." text to "hash (list ,@(read-...)". I'd also change the hash table key test.)
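For concreteness, a sketch with that fix applied (not the linked implementation verbatim; MAKE-HASH is a made-up helper name):

(defun make-hash (&rest kvs)
  ;; helper: build an EQUAL hash table from alternating keys and values
  (let ((ht (make-hash-table :test #'equal)))
    (loop for (k v) on kvs by #'cddr
          do (setf (gethash k ht) v))
    ht))

(set-macro-character #\{
  (lambda (stream char)
    (declare (ignore char))
    ;; splice the elements into a call so they're evaluated at runtime:
    ;; {:a (1+ 2)} maps :A to 3, not to the list (1+ 2)
    `(make-hash ,@(read-delimited-list #\} stream t))))

(set-macro-character #\} (get-macro-character #\)))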
Please try to respond to my argument without 1) straw-manning it, 2) or reading a bunch into it that isn't there.
You made a point about the macro only costing a few lines of code. That is not a useful way to look at macros, as I can attest having written any number of short macros that I in retrospect probably shouldn't have written, and one or two ill-conceived attempts at DSLs.
Sometimes fewer lines of code is not better. Code golfing is not, in and of itself, a worthy engineering goal. The most important aims of abstraction are clarity and facility, and if you do not keep those in mind as you're shoving things into macros and subroutines and code-sharing between different parts of the codebase that should not be coupled, you will only lead yourself and your teammates to grief.
Things have costs. Recognize what the costs are. Use macros judiciously.
I started with my two questions not to strawman, but to find out if there was some underlying point or argument you had in mind that prompted you to make such a short reply in the first place. All I could read in it was not an argument, but a high level assertion, and not any sort of call to action. That's fine, I normally would have ignored it, but I felt like riffing on my disagreement with that assertion. To reiterate, I think you can reasonably measure cost through lines of code, even if that shouldn't be the only or primary metric, and I provided some outside-my-experience justifications, including one that suggests that an easy to measure metric like lines of code correlates with notoriously harder to measure metrics like the three things you stated. (If cost is to be measured by clarity -- how do you even measure clarity? Halstead provides one method, it's not the only one, but if we're going to use the word "measure", I prefer concrete and independently repeatable ways to get the same measurement value. Sometimes the measurement is just a senior person on a team saying something is unclear, often if you get another senior's opinion they'll say the same thing, but it'd be nice if we could do better.)
Now you've expanded yourself, thanks. I mostly agree. My quibble is with size being "not a useful way": a larger macro is more likely to be complex, difficult to understand, buggy, and harder to maintain, so it had better be enabling a correspondingly large amount of utility. But it doesn't necessarily have to be complex; it could just be large while wrapping a lot of trivial boilerplate. DSL-enabling macros are often large, and I don't think they justify themselves much of the time. And I've also regretted some one-line macros. Length can't be the only thing to look at, but it has a place. I'd much rather be on the hook for dealing with a short macro than a large one. Independent of size, I rather dislike how macros in general can break interactive development. What's true for macros is that they're not something to spray around willy-nilly; it's a lot less true to say the same about functions.
If you asked, I don't think I'd have answered that those two things are the most important aims of abstraction, but they're quite important for sure, and as you say the same problems can come with ill-made subroutines, not just macros. I agree overall with your last two paragraphs, and with the call to action about recognizing costs and using macros judiciously. (Of course newbies will ask "how to be judicious?", but that's another discussion.)
That's not implementing a literal (an object that can be read), but a shorthand notation for constructor code. The idea of a literal is that it is an object created at read-time and not at runtime.
In Common Lisp every literal notation returns an object when read -> at read-time. The {} example does not, because the reader macro creates code and not a literal object of type hash-table. The code then needs to be executed to create an object -> which happens at runtime.
> literal adj. (of an object) referenced directly in a program rather than being computed by the program; that is, appearing as data in a quote form, or, if the object is a self-evaluating object, appearing as unquoted data. ``In the form (cons "one" '("two")), the expressions "one", ("two"), and "two" are literal objects.''
CL-USER 4 > (read-from-string "1")
1
1
CL-USER 5 > (read-from-string "(1 2 3)") ; -> which needs quoting in code, since the list itself doubles in Lisp as an operator call
(1 2 3)
7
CL-USER 6 > (read-from-string "1/2")
1/2
3
CL-USER 7 > (read-from-string "\"123\"")
"123"
5
CL-USER 8 > (read-from-string "#(1 2 3)")
#(1 2 3)
8
But the {} notation does not describe a literal: when read, it creates code, not an object of type hash-table.
This also means that (quote {:a 1}) generates a list and not a hash-table when evaluated. A literal can be quoted. The QUOTE operator prevents the object from being evaluated.
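The parent example was presumably along these lines (a reconstruction; the expansion shown is schematic):

(defun foo ()
  {:a 1})   ; reads as constructor code, e.g. (HASH :A 1), not as an object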
In the above example there is no hash-table embedded in the code. Instead, each call to FOO will create a fresh new hash-table at runtime. That's not the meaning of a literal in Common Lisp.
Thanks for the clarification on the meaning of "literal" in Common Lisp, I'll try to keep that in mind in the future. My meaning was more in the sense of literals being some textual equivalent representation for a value. Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant. For example in Python, one could write:
a = list()
a.append(1)
a.append(2)
a.append(1+3)
You can call repr(a) to get the canonical string representation of the object. This is "[1, 2, 4]". Python's doc on repr says that for many object types, including most builtins, eval(repr(obj)) == obj. Indeed eval("[1, 2, 4]") == a. But what's more, Python supports a "literal" syntax, where you can type in source code, instead of those 4 lines:
b = [1, 2, 1+3]
And b == a, despite this source not being exactly equal at the string-diff level to the repr() of either a or b. The fact that there was some computation of 1+3 that took place at some point, or in a's case that there were a few method calls, is irrelevant to the fact that the final (runtime) value of both a and b is [1, 2, 4]. That little bit of computation of the elements is usually expected in other languages that have this sort of way to specify structured values, too; Lisp's behavior trips up newcomers (and Clojure's as well for simple lists, but not for vectors or maps).
Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?
> Whether or not computation behind the scenes happens at some particular time (read/compile/run) isn't too relevant.
Actually it is relevant: is the object mutable? Are new objects created? What optimizations can a compiler do? Is it an object which is a part of the source code?
If we allow [1, 2, (+ 1 a)] in a function as a list notation, then we have two choices:
1) every invocation of [1, 2, (+ 1 a)] returns a new list.
2) every invocation of [1, 2, (+ 1 a)] returns a single list object, but modifies the last slot of the list. -> then the list needs to be mutable.
(defun foo (a)
[1, 2, (+ 1 a)])
Common Lisp in general assumes that in
(defun foo (a)
'(1 2 3))
it is undefined what exact effects attempts to modify the quoted list (1 2 3) have. Additionally, the elements are not evaluated. We have to assume that the quoted list (1 2 3) is a literal constant.
Thus FOO
* returns ONE object. It does not cons new lists at runtime.
* modifying the list may not be possible. A compiler might allocate such an object in a read-only memory segment (that would be a rare feature -> but it might happen on architectures like iOS, where machine code is by default not mutable).
* attempts to modify the list may be detected.
SBCL:
* (let ((a '(1 2 3))) (setf (car a) 4) a)
; in: LET ((A '(1 2 3)))
; (SETF (CAR A) 4)
;
; caught WARNING:
; Destructive function SB-KERNEL:%RPLACA called on constant data: (1 2 3)
; See also:
; The ANSI Standard, Special Operator QUOTE
; The ANSI Standard, Section 3.7.1
;
; compilation unit finished
; caught 1 WARNING condition
(4 2 3)
* attempts to modify literal constants may modify coalesced lists
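For example (reconstructing the kind of function meant):

(defun foo ()
  (let ((a '(1 2 3))
        (b '(1 2 3)))
    (setf (car a) 4)   ; undefined behavior: modifies a literal constant
    (eq a b)))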
In the above function, a file compiler might detect that similar lists are used and allocate only one object for both variables. The value of (FOO) can then be T or NIL; a warning might be signalled, or an error might be detected.
So Common Lisp really pushes the idea that in source code these literals should be treated as immutable constant objects, which are a part of the source code.
Even for structures: (defun bar () #S(PERSON :NAME "Joe" :AGE a)) -> A is not evaluated, and BAR always returns the same object.
> Do you have any suggestions on how to talk about this "literal syntax" in another way that won't step on or cause confusion with the CL spec's definition?
Actually I was under the impression that "literal" in a programming language often means "constant object".
Though it's not surprising that languages may assume different, more dynamic semantics for compound objects like lists, vectors, hash tables or OOP objects, especially languages focused more on developer convenience than on compiler optimizations. Common Lisp does not provide an object notation with default component evaluation here, but assumes that one uses functions for object creation in this case.
Yeah, again I meant irrelevant to those who share the broader ("dynamic" is a fun turn of phrase) definition of "literal" I was using; it's very relevant to CL. I thought of mentioning the CL undefined behavior around modification you brought up explicitly in the first comment as yet another reason I try to avoid using #() and quoted lists, but it seemed like too much of an aside in an already long aside. ;) But while in aside-mode, I really think this behavior is quite a bad kludge in the language, and possibly the best thing Clojure got right was its insistence on non-place-oriented values. But it is what it is.
Bringing up C is useful because I know a similar "literal" syntax has existed since C99 for structs, and is one of the footguns available to bring up if people start forgetting that C is not a subset of C++. Looks like they call it "compound literals": https://en.cppreference.com/w/c/language/compound_literal (And of course you can type expressions like y=1+4 that result in the struct having y=5.) And it also notes about possible string literal sharing. One of the best things Java got right was making strings immutable...
> The ability to evaluate code interactively without recompilation
SBCL and other implementations compile code to machine code then execute it. That is to say, when a form is submitted to the REPL, the form is not interpreted, but first compiled then executed. The reason execution finishes quickly is because compilation finishes quickly.
There are some implementations, like CCL, with a special interpreter mode exclusively for REPL-usage.[1] However, at least SBCL and ECL will compile code, not interpret.
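This is easy to check at the REPL (SBCL shown):

CL-USER> (defun square (x) (* x x))
SQUARE
CL-USER> (compiled-function-p #'square)
T   ; the definition was compiled to machine code on the spot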
I was specifically talking about the fast evaluator (the sb-fasteval contrib) for SBCL. But even without that contrib, SBCL does have another evaluator as well, which is used in very specific circumstances.
I think a lot of this is confusion between online versus batch compilation? Most of us have only ever seen/used batch compilation. To that end, many people assume that JIT in an interpreter is how online compilation is done.
I probably am more guilty of that than I should be.
I confess I wasn't positive what the correct term would be. "Online" is common for some uses of it. And I "knew" that what we call compilation for most programs used to be called "batch compilation." Searching the term was obnoxious, though, such that I gave up. :(
There are 1980s papers about Lisp compilers competing with Fortran compilers; unfortunately, with the AI Winter and the high costs of such systems, people lost sight of that.
Well, I imagine at the time they had some LISP implementations that were very well tuned for specific high-end machines, which essentially duplicated Fortran functionality. This is difficult to do for general-purpose Lisps like SBCL. It was also probably very expensive.
There are some libraries that make maps and the like usable with a cleaner syntax. You could also make some macros of your own for the same purpose, if syntax is the concern.
This is the Trial game engine[1]. I've been using it recently to make a small game[2], and can say it's quite nice to be able to hot-reload functions to test logic, collisions, etc. without having to restart the game. The Trial author has been working on porting it to the Switch to expand the reach of the game Kandria[3] (and other future CL games).