We use it for stock market analysis over at https://feetr.io. Honestly, I couldn't imagine a better language; it makes large tasks almost effortless. Plus, the fact that I can connect to the running image and query data feels like magic, and it makes farming content for social media almost facile, as I can pipe some of the data through cl-table and just post a screenshot.
We don't really overcomplicate it. A merge into development pulls the code onto the development server and then either restarts lisp or recompiles (depending on the application), and the same thing happens on a merge to prod.
We're currently experimenting with Guix for server configuration and it's nice but sometimes the context switch from CL to Guile does trip you up for 0.0001ms.
I think the only difference with CL versus another language is that we can connect to the running image and create/recompile functions, and that can land you in a scenario where you think you have committed a function because it works in the image, so you build on it; then the image restarts and suddenly a whole bunch of code doesn't work. That will happen exactly once before you draw up a guide on how the team should use Sly/Slime.
Interesting. I had forgotten about Guix, and now is just the right time for me to try it out. Thanks for reminding me.
> Me. It was me. I didn't commit the function
Haha. Yeah, modifying "live" systems without a strict mindset tends to cause that. The worst is Linux images running on production VMs which are partially specified by e.g. Ansible, but with little modifications here and there made by one person or another, so when that machine needs to be recreated you get a nice surprise.
Guix is in a really good place right now and I'd have no reservations recommending it. Of course, there is the initial onboarding experience, which can be tough if it's a completely new concept, but overcoming that is 100% worth it, IMO.
> the worst is Linux images running on production VMs which are partially specified
I could absolutely see that being worse in a really big and scary way. Lisp is great because if you try to call a function that doesn't exist, it'll scream bloody murder at you, so we got lucky that it was picked up immediately. But that's lisp for you: incredibly nice and easy to use.
`cloc` has it at 24k lines of code, so it's by no means a huge project, but it's big enough that I feel we would have encountered a large variety of issues by now. As of yet there hasn't been anything major. We have felt the lack of libraries at times, but that just means we need to write more lisp, which is a good thing, as lisp is fun.
Honestly, I don't know that I'd classify our use of CL as dynamic. We're happy customers of `deftype` and `declaim`. While it's true that not every function makes use of them, most do. So in that regard I can't comment, but that's the beauty of lisp: it's the language that you need it to be.
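To give a rough idea of the flavor (the type and function names below are made up for illustration, not our actual code):

```lisp
;; A minimal sketch of the deftype/declaim style described above.
;; PERCENTAGE and APPLY-DISCOUNT are invented names for illustration.
(deftype percentage ()
  "A real number between 0 and 100, inclusive."
  '(real 0 100))

(declaim (ftype (function (double-float percentage) double-float)
                apply-discount))

(defun apply-discount (price pct)
  "Return PRICE reduced by PCT percent."
  (declare (type double-float price)
           (type percentage pct))
  (* price (- 1d0 (/ pct 100d0))))
```

With declarations like this in place, SBCL will warn at compile time about call sites that visibly violate the declared types.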
Yeah, we use CCL for local development and SBCL for running code on servers. I hear that the development story is better with CCL due to improved error messages, but I'm not sure how much I agree with that. However, we continue the practice because it ensures we're writing portable code and aren't tied to a single implementation.
The old joke about lisp ruining all other languages holds true (at least in my experience), so it's a monkey's paw wish: yes, you get to work with lisp, but you'll never be able to enjoy another job again.
Lisp spoils you for other languages, but it also crystallizes and clarifies them. As you enjoy the latter effect, the former will fade.
By "clarifies" I mean that if something is being done badly in some language, your mind has a reference model for it being done well. That model can guide you around the distracting crap. It also gives you a vocabulary for talking and thinking about it.
The Clojurians community still seems to be active and surprisingly numerous; the data science subgroups held regular online meetups not long ago. Some would say Clojure is not a true Lisp, but well.
Clojure and ClojureScript communities are thriving. Our product, orgpad.com, is written entirely in those two. I have written about some of the technologies we use before.
There are some other explanations given in sibling comments. They may be right, but there's another point that may also explain some of this sentiment.
In Common Lisp and its antecedents, a "list" is a chain of cons cells, as mentioned by some of the sibling comments, but that's not all. Another important point is that in Common Lisp and its antecedents, source code is not made of text strings; it's made of Lisp data structures--atoms and chains of cons cells.
A text file containing "Common Lisp source code" does not actually contain Common Lisp source code. It contains a text serialization of Common Lisp source code (one of many possible text serializations, actually). It isn't actually Common Lisp source code until the reader gets done with it.
This might sound like a trivial pedantic point, but it isn't. Because Common Lisp source code is made up of standard Common Lisp data types, its standard library provides everything you need to walk arbitrary source code, deconstruct it, transform it, construct it, and compile it. Those features are all built into the language in a way that they are not in most languages.
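A tiny Common Lisp sketch of what that means in practice; the specific transformation (renaming a definition) is purely an illustration:

```lisp
;; The reader turns a textual serialization into list structure
;; (symbols, numbers, and chains of conses)...
(defparameter *form*
  (read-from-string "(defun double-it (x) (* x 2))"))

;; ...which ordinary list operations can take apart and rebuild.
(defun rename-definition (form new-name)
  (list* (first form) new-name (cddr form)))

(defparameter *renamed* (rename-definition *form* 'double-it-2))

;; The rewritten structure is still source code: EVAL and COMPILE accept it.
(eval *renamed*)
(funcall 'double-it-2 21)   ; => 42
```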
For those of us who are accustomed to using those features regularly, working with a language that lacks them is such an impoverished experience that I can understand folks objecting that "that's not a proper Lisp." I don't tend to make that objection myself, but I do understand it.
If your Lisp does not represent its source code in this way, or if it doesn't even have the data structures that source code is made of in Common Lisp and its antecedents, then there is a nontrivial sense in which it's not a Lisp--or at least not the kind of Lisp that those older ones are.
> In Common Lisp and its antecedents, a "list" is a chain of cons cells, as mentioned by some of the sibling comments, but that's not all. Another important point is that in Common Lisp and its antecedents, source code is not made of text strings; it's made of Lisp data structures--atoms and chains of cons cells.
Source code in Clojure is also not made of text strings; Clojure likewise reads text as a serialization of data structures, which are then interpreted as source.
The difference is, Clojure uses data structures other than cons cells for source.
What do you see as particularly important about cons cells? What advantages do they give what some might call a "real LISP" over Clojure, which, I'd argue, smartly abstracts around more modern data structures like vectors, maps, sequences, and collections, as opposed to being "married" to cons cells, as Rich Hickey once put it?
Common Lisp uses cons cells. So do a bunch of older Lisps whose design fed into the design of Common Lisp. That's all. There's nothing else special about cons cells in my mind.
Cons cells are not the important point--at least not for me. The important point in this context is that expressions are represented by something better (that is, more conveniently structured) than flat text strings, that the representation be a standard data structure in the language, and that operations on source code be implemented by APIs exposed as standard parts of the language.
As far as I'm concerned, if a language does that, then it's nailed this particular part of being Lispy. There are some other parts, but that discussion is outside the scope of this one.
The fancy word is "homoiconicity" but that is more about appearance. Homostructural? Homotypic?
I think it's also important, at least in my opinion, that that data structure be extremely simple, and cons cells are about as simple as you can get. When you start adding "Well, a vector is different than a string, n-tuple, or array, so your code has to figure out which one it is dealing with", that's when you run into issues. You could step back and just go object-oriented, but at its core an object is just a struct with a pointer to a function table and/or dictionary data, so we're right back at "cons cell if you squint".
Internally it may be sped up, but conceptually "everything is a small integer value or a cons cell" maps pretty closely to how I think about low-level data structures. Something something build your own arbitrary-precision floating point number...
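A minimal illustration of that mental model: a list and a lookup table are both just conses.

```lisp
;; A proper list is just conses terminated by NIL...
(equal (cons 1 (cons 2 (cons 3 nil))) '(1 2 3))    ; => T

;; ...and a lookup table can be just conses of conses (an alist),
;; so the same few primitives (CONS, CAR, CDR) cover both.
(defparameter *table* (list (cons :low 1) (cons :high 9)))
(cdr (assoc :high *table*))                         ; => 9
```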
There's a decent argument for keeping such a foundational data structure as simple as possible, but there's also a decent argument for not making it too simple.
Cons cells are certainly very simple. They're so simple that, as Moon once observed to me, there's no place on them to hang metadata. For example, if your source code is made of cons cells, you might wish that they had some sort of metadata slot so that you could use it to keep track of where a given hunk of source code came from. You can't though. You have to kludge up some out-of-band solution for things like that.
We were talking about my hobby Lisp, Bard. He liked that it separated protocol from representation, so you could have Lists that were made of something other than cons cells. In fact, in Bard your Lists can be made of anything you like, as long as it participates in the List protocol. In particular, they can be made of something that has some place to hang metadata.
Rich Hickey of course also gave a bunch of Clojure's data structures places to hang metadata, possibly for similar reasons.
Secret meta-data in a cons cell is not out of the question.
In TXR Lisp, cons cells are four pointer-sized fields wide, so one field is not used. Almost. The field is used in the hash table implementation, in which entries are conses; it sticks the hash code in there. That hash code is a pointer-sized word with no tag; the garbage collector can safely ignore it.
The extra field is currently not used for tracking source location information, though it could be. Source location info is instead tracked in an external hash table. (The table is configured with weak semantics, so when the code becomes garbage, the entries vaporize.)
That representation could change in the future. It would mean that when the garbage collector traverses conses, it has to look at that hidden field of each one. And each time we allocate a fresh cons cell, we have to make sure it is initialized.
I'd have to benchmark it.
Associating expressions with source location info is a cost that we bear only when processing source code. If we shoehorn it into conses, then there is some nonzero cost to all cons cell processing, whether we are scanning code or not.
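For readers who haven't seen the external-table approach, here is a rough sketch of the idea in Common Lisp terms. It leans on SBCL's non-portable :weakness extension, and all the names are invented, so it's an illustration of the technique rather than TXR's actual code.

```lisp
;; Sketch of the "external table" idea: source location info lives in a
;; side table keyed on the cons, so the cons itself stays two slots wide.
;; :weakness is an SBCL extension; *SOURCE-LOCATIONS*, NOTE-LOCATION and
;; FORM-LOCATION are invented names.
(defparameter *source-locations*
  (make-hash-table :test 'eq :weakness :key))

(defun note-location (form file line)
  "Record where FORM was read from, without touching the cons itself."
  (when (consp form)
    (setf (gethash form *source-locations*) (list file line)))
  form)

(defun form-location (form)
  "Return (FILE LINE) if a location was recorded for FORM, else NIL."
  (gethash form *source-locations*))
```

Because the table is keyed weakly, entries disappear once the code objects they describe become garbage.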
An important problem is that meta-data attached to cons cells (whether internal or external) is not copied across traditional tree-structure rewriting operations.
TXR Lisp's expander does some work behind the scenes to propagate location info, like from macro calls to their expansions. The parser has a flag for whether to attach the info to objects in the first place. It's on by default if we are reading code, but not when reading data.
Outside of the expander, a few places in the compiler have to be aware of this (when the compiler performs its own tree rewriting outside of the macro framework).
Overall I'm satisfied with the reporting. From time to time I see a bug: an error occurs for which source location info isn't available but should be.
I didn't give it character precision: I think that compiler messages that report line number and character column are too rococo for my taste. If you can't figure out the problem from a line number, maybe your code is stuffing too much into one line of code.
> Associating expressions with source location info is a cost that we bear only when processing source code. If we shoehorn it into conses, then there is some nonzero cost to all cons cell processing, whether we are scanning code or not.
That's true only because cons cells have a specific representation, but they don't have to. Bard classes are defined by protocols, not representations.
If I remember right, that's what Moon liked: because Bard's classes were defined by protocol and not representation, source code could be made of lists, and could have a place to hang metadata, without imposing that cost on other lists, because lists were not any specific representation; they were just any representation for which the list protocol was defined.
Even though I have two kinds of cons cells (regular and lazy) as well as the ability of objects to implement car and cdr and then work with those functions, I'm still fairly reluctant.
I wouldn't want source code to use objects rather than real cons cells. Objects are heavier-weight: each object is a cons-cell-sized object, plus something in the dynamic heap.
There are print-read consistency issues. Lazy conses and regular conses are indistinguishable in print. If you print some lazy conses, and read that back, you get regular conses. Of course, an infinite list made using lazy conses will not print in a readable way, so we can sweep that under the rug.
Objects implementing car and cdr have arbitrary print methods too. They won't print as lists. Those programmed to print as lists won't have print-read consistency.
Point taken, but I feel like I should explain that the word "class" has an idiosyncratic meaning in Bard.
Bard classes are not conventional object-oriented classes; they aren't even CLOS-style classes. A Bard class is a set of representations that participate in a given protocol. That being the case, a hypothetical Bard cons cell representation could be exactly the same as a TXR Lisp cons cell, or the same as a Common Lisp cons cell. In either case it need not be the only representation of a cons cell in the language.
(I feel like someone is going to object that I shouldn't use the word "class" for a concept that is so different from what it usually means in object-oriented languages, and that might be true. If someone suggests a better term for a set of representations defined by a protocol in which they all participate, I'll consider adopting it.)
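For illustration only, here is a rough Common Lisp approximation of that idea using generic functions; the protocol and the names are invented, and this is not Bard's actual implementation.

```lisp
;; The "List" protocol is just two generic functions; anything that
;; implements them counts as a List. HEAD, TAIL and ANNOTATED-CONS are
;; invented names.
(defgeneric head (list) (:documentation "First element of LIST."))
(defgeneric tail (list) (:documentation "Rest of LIST."))

;; Plain conses participate in the protocol...
(defmethod head ((c cons)) (car c))
(defmethod tail ((c cons)) (cdr c))

;; ...and so does a representation that has room for metadata.
(defstruct annotated-cons item rest source-location)
(defmethod head ((c annotated-cons)) (annotated-cons-item c))
(defmethod tail ((c annotated-cons)) (annotated-cons-rest c))

;; Code written against the protocol doesn't care which representation it gets.
(defun second-element (list) (head (tail list)))
```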
I have experience with both: multiple deeply-integrated cons objects that satisfy the consp function, as well as allowing non-cons objects (including classes in the OOP sense) to take operations like car and cdr.
Once you merely go from one cons type to two, with their own tags, every place in the run-time which checks for a cons cell now has to check for two possible type tags. An atom is everything that is not a cons, as you know, so that function is also affected.
(I wonder whether it wouldn't be better to just have one cons tag, and use some flag field to distinguish lazy conses.)
Also fair points. In Bard I’ve been willing to pay that cost because exploring types that are defined by protocol was one of the motivating reasons for working on it.
I did not intend to classify Clojure as "not a Lisp". I didn't intend to comment on Clojure at all, specifically, so commenting on this particular thread was an ill-conceived choice on my part.
I wanted to describe the source-code peculiarity because it hadn't been discussed elsewhere in the comments and I think it's important in what you might call old-fashioned Lispiness. I should have chosen another place for my comment. Sorry about that.
It's a slight cultural shift. Rich Hickey probably tried to modernize/homogenize things sensibly, adding a few literals for vectors, maps, and sets (a very interesting idea ergonomics-wise; I'm 80% for it personally, as having easy data notation is such a bliss) and some underlying changes (immutable data structures first) which make Clojure feel different from lisps/CL/Scheme.
My understanding is that Clojure is meant to be Scheme-like, but it is not fully compliant with any Scheme spec; my only guess is that this is due to JVM-specific nuances, but I could be wrong. It has been some time since I last dived into Clojure specifics. I will say, though, that aside from Racket, I think Clojure is top notch, although it seems a lot of my favorite projects from a few years ago have been abandoned. One of my favorite things with Clojure is using the REPL to build a GUI with JVM libraries.
>> My understanding is that Clojure is meant to be Scheme-like, but it is not fully compliant with any Scheme spec; my only guess is that this is due to JVM-specific nuances, but I could be wrong.
Clojure's syntax and semantics are quite different from those of Scheme and Common Lisp.
Most differences are due to design choices, not merely JVM nuances. A few differences are due to JVM limitations at the time that Clojure was designed.
I don't think any attempt was made to comply with a Scheme spec.
> I’ve seen people saying that but never with an argument as to why. What do they mean?
I don't know, but if I had to guess, it's because lisp is a list-processing language and Clojure doesn't really support lists (I mean, it's possible to make some, but there are none out of the box); instead it has a variety of trees that mimic the runtime performance of lists, arrays, hashes, etc.
I'm not sure I understand how there are no lists out of the box in Clojure. What makes the data structures not valid lists? Basic lisp lists are nestable; doesn't that make them trees? The underlying structure is there to support immutability by default, but that's under-the-hood stuff. Conceptually and, I think more importantly, syntactically, they're lists.
When lispers talk about lists, they are usually speaking about a very specific type of data structure: a linked list of cons cells[0]. Clojure's lists are not actual chains of cons cells; they are persistent data structures implemented on the JVM. This means Common Lisp code can't directly interoperate with Clojure code.
The precise/pedantic lisper may insist that, since Lisp stands for LISt Processing, lists are chains of cons cells, and Clojure doesn't use cons cells for its built-in lists, Clojure is not a Lisp.
If, however, you view lisp (lower case) as a family of homoiconic languages that use s-expressions, then Clojure happily fits under that umbrella.
The trick is to pay close attention to whether the person is talking about a Lisp (an ANSI Common Lisp implementation) or a lisp (the family). Sometimes people say "lisp" when they are talking about "Lisp", which can cause some confusion.
They are written in Java, and implement a bunch of interfaces, so the implementation looks complicated, but they are basically just classes with _first and _rest fields.
AFAIU, "list" has a specific technical meaning when used by someone complaining about Clojure not having them. It has to do with the implementation details, not just the semantics of how they're used.
Clojure has "sequences" or "seqs", which are semantically the closest to what lispers mean by a "list"—but also "vectors," denoted by square brackets, "maps," denoted by curly brackets, and "sets", denoted by curly brackets prefixed with a hash sign. Having these as core data structures violates the Lisp principle of "everything is just a list," and they introduce something other than round parens into the syntax, which looks really weird to those practiced with traditional/conventional Lisps.
For those well-practiced in "true" Lisps, Clojure reads like a very strange hybrid (one might say "corruption") of Lisp and JSON.
Clojure uses high-level data structures for list-like data. Lisp usually uses a very primitive data structure (two-slot linked-list cells).
The main point is that Lisp has a decades-old, established set of data structures, operators on them, and programming concepts. For example, if I were to read a Lisp book, I would find explanations of these core data structures and operators. In a Clojure book, many things work sufficiently differently and are also named differently. In Lisp, for example, "atom" is a central concept: it's anything which is not a linked-list cell. In Clojure this word names a completely different concept: a data structure to manage state.
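To make the Lisp sense concrete (this is standard Common Lisp, nothing exotic):

```lisp
;; In Common Lisp, an atom is simply anything that is not a cons cell.
(atom 'x)        ; => T
(atom 42)        ; => T
(atom '(1 2 3))  ; => NIL  (a chain of conses)
(atom nil)       ; => T    (NIL is both an atom and the empty list)
```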
Clojure is more like Lisp, Java, and SML put into a blender. The result may have a similar color and some traces of the flavor, but other than that it is a new language.
I'm probably the wrong person to have this conversation with - I agree with you for the most part, and I think Clojure is a lisp.
My guess was the only thing I could really think of. That, and the syntactic support for and integration of non-list datatypes into the core language (e.g. function syntax).
I'm sure it matters for purists (a valid concern), language designers, and "lisp power users" (sic?).
I do feel, however, that that statement is kind of like a "tomato is a fruit" statement: technically true, but for the vast majority (again, not all) of tomato users, does it really matter?