Do any lisps support the interactive style of programming CL is famed for, apart from CL itself? I think resumable exceptions are an important part of it.
To a large extent the famed CL experience comes from working with an image, so you get something similar in Pharo and Factor.
Personally, in practice, I rarely feel that resumable exceptions are important. Typically I have a data literal as a kind of mock and push that through the function or function chain I'm tinkering with in the REPL until I'm satisfied with the result.
Might be that I spend a lot of time in PHP/Psysh, Elixir, Picolisp and so on rather than good old CL; maybe I'd have another opinion if I had ever been paid for CL work.
Resumable exceptions are really cool though: on error you get the interactive debugger, go to the bug, recompile the function, come back to the debugger, resume from the error, and see execution pass. Another CL thing that speeds up development, is more fun, and is extremely useful when working with long scripts (but for common tasks too, really).
(also kudos to anyone using Factor in production^^)
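The essence of that resume-and-continue workflow can be approximated even in languages without resumable conditions, by letting the caller supply a recovery handler that runs at the error site instead of unwinding the stack. A rough Python sketch (the names here are illustrative, not any real library's API):

```python
# A rough analogy to CL restarts: the error site invokes a caller-supplied
# handler instead of unwinding, so computation can resume with a recovery value.
def parse_int(s, on_bad_value=None):
    try:
        return int(s)
    except ValueError as exc:
        if on_bad_value is not None:
            # "Restart": chosen by the caller, executed at the error site,
            # without losing the rest of the computation in progress.
            return on_bad_value(s, exc)
        raise

data = ["1", "2", "oops", "4"]
# The handler substitutes 0 and the loop keeps going instead of aborting.
results = [parse_int(x, on_bad_value=lambda s, e: 0) for x in data]
print(results)  # [1, 2, 0, 4]
```

What CL adds on top of this pattern is that the debugger pops up interactively at the error site and lets you pick a restart (or redefine the function) after the fact, rather than you having to decide the recovery policy up front.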
Sure, and I play around with SBCL sometimes since it's a rather nice REPL experience. It's even nicer to have a GUI though, like Smalltalks tend to have.
In work settings I've mainly used Factor for data exploration, but I can see it fitting neatly as a data transformer or combiner in a pipeline from one set of APIs or protocols to another, since it's relatively easy to express such things and one can get a performant binary to deploy.
I'd love to see some kind of shim to run Medley on top of SBCL, rather than its own VM.
If you are trying to attract people to your programming language in 2024, I'm afraid that Emacs is not tempting bait, no matter how good the integration. It appeals to very niche tastes.
1. Because any time I've seen a newbie ask about Common Lisp dev tools, they're told it's worth the time to learn Emacs and that Emacs with Slime is amazing.
2. I've been trying to learn Emacs on and off for about 25 years and I personally think it's horrible beyond belief. Vim is just as bad, but at least as someone who's been using vi for closer to 35 years now, I know how to open something, change a few characters and save it.
I'm not a programmer. I don't use code editors from choice. But I have done in the past and I have opinions about them.
It looks to me like you have a choice: a nice modern editor with poor integration, or a horrible crufty old editor which doesn't even know what the Alt key or windows are called but which has great integration.
IMHO, as expressed on HN before, Lisp ought to have a slick GUI system for newbies approaching it. The commercial ones do. For FOSS, the code is out there. It's a done thing, many decades ago. It just needs modernising.
It's hard to believe that you should not be able to open a file, edit it, and save it with GNU Emacs. I have a Mac: I downloaded https://emacsformacosx.com , started it, Command-O opened a file dialog, I typed characters, and Command-S saved the file.
> For FOSS, the code is out there. It's a done thing, many decades ago. It just needs modernising.
That one is an ancient code base dating back to the 60s, written in a Lisp dialect roughly ten people in the world can program with some competence (I can't, can you?). It's a nice project, but it is far, far away from what GNU Emacs provides. GNU Emacs has many thousands of man-years more work and polishing in it, ongoing.
If you want to learn a large and complex Lisp like Interlisp or Common Lisp, then you'd be better off using a development environment for it. I doubt Medley is easier to use and learn than something like GNU Emacs + Slime + SBCL. For Common Lisp, Medley is the worse development environment, since very little code has been developed for and with it. People use other tools to write Lisp code (vim, GNU Emacs, ...). People are just trying to get Medley to somewhat support the state of Common Lisp from 34 years ago (CLtL2). It has, like, zero support for Scheme or other programming languages (Python, Java, JavaScript, C++, Rust, ...).
GNU Emacs is not my favorite Lisp IDE, but it's by far the most practical solution for average developers: it does not cost anything, it has good Lisp support for several Lisp dialects and all kinds of other programming languages, it runs well enough on almost all current operating systems, hopefully respects my data privacy, ...
Sure one can use a new editor with less support for Lisp. I've also known Lisp developers using vi in a terminal. People have different preferences for their tools.
I learned about 20 different text editors before CUA came along. I refuse to learn any new ones unless they comply with CUA.
I write English, not code. Any one of dozens of CUA editors are perfectly fine and adequate for my needs.
> GNU Emacs was ported to window systems loooooooong time ago.
So its programmers have had decades to adapt it so that it conforms to the standards set in the 1980s and adopted by the entire computer industry, from Apple to Xerox.
If they can't be bothered to adapt it, I can't be bothered to learn their ugly pre-standardisation UI.
Some examples: a document is held in a window which can be split into panes to see more than one at a time. Those are the terms. Use them, or GTFO.
The keystrokes for file operations are always Ctrl + the first letter of the relevant English verb, e.g. Ctrl+O is Open, Ctrl+P is Print. The Control key is abbreviated Ctrl and nothing else.
The key next to Ctrl is called Alternate, abbreviated Alt. It is not called anything else on PC systems, although Macs call it Option, abbreviated Opt.
It is not called "Meta" and no hardware made in 40 years called it that. The software was ported to the hardware; now port its manual and its UI, or GTFO.
Incidentally, a second danger of using the wrong name is that because it's wrong, different teams use it for different things. The KDE team call the Super key "Meta".
Basically nothing of that applies to my main devices: Macs, iPads, iPhones.
None of those use CUA (-> the Apple UI guidelines for those are either older (Mac) or newer), keystrokes for file operations don't use Control, there are no win/super keys, there are no multiple documents using window "panes" (as you can see in any of Apple's applications; for example, Apple Terminal uses tabs, not panes), ...
It applies to Macs, yes, and more than you'd think. OS X adopted more CUA stuff than Classic had... whereas the design of CUA inherited a large amount from Classic MacOS and modified it both to add more key controls, and to avoid a look-and-feel lawsuit like the one that crippled PC GEM.
Command is Super. Same key, same scancode, etc. Just a different name.
Very broadly, if one remaps the bottom left keyboard key on a Mac to Cmd, and the Super key to Ctrl, then CUA control keys all Just Work™ on Mac OS X.
This is in fact how I normally use Macs when using a PC keyboard on them.
So, no, I disagree on all points. (I don't really care about the phones. I don't own them or use them.) I've spent considerable time over decades studying this and I disagree with every point here.
Classic is long dead. OS X does not exist anymore; it's now called macOS, and its UI has been merging with the iOS and iPadOS user interfaces for some time now, because that's where many new users are coming from.
Even for Microsoft: many users will now use something like the Microsoft Office suite in a web browser.
Take the current "Apple Pages" text editor / layout program.
I've not used Emacs to edit a file since 1994, and I hadn't used it for very long back then.
But I seem to remember, after these 30 years, that it was something simple, like Ctrl-X Ctrl-S to save, and Ctrl-X Ctrl-C to quit.
Edit: I don't have Emacs installed absolutely anywhere; but I went to a machine provided by the GCC Farm project and tried it there. I remembered right.
Good for you. In 1994 I was mainly supporting DOS, Windows for Workgroups, Novell Netware, and DEC VAX/VMS. There was no xNix of any form in the UK branch of the company, and I think the HQ mainly ran on much the same stuff with some added Classic MacOS.
The point being that I learned literally dozens of editors and UIs in the 1980s and early 1990s... and then CUA came along and swept it all away, and I never looked back.
I wrote much of the original Wikipedia article on CUA:
I never ran Emacs, never had any reason to learn Emacs, so I never did.
I did learn WordStar, WordStar 2000 (totally different), WordStar Express (totally different again), WordPerfect, MultiMate, DisplayWrite, MS Word 4.x and the different 5.x and the different 6.x, plus WinWord 1 and the different 2, and LocoScript, and The Last Word, and Edlin, and DOS EDIT, and many many more.
I am very much not unable to learn new editors. But for over 30 years now, I haven't had to. And that's wonderful and I am absolutely not learning another new different editor now, at over half a century old.
Any editor that wants me to try it must conform 100% to the CUA standard, or it can die in a fire. Weird editor keystrokes are 1970s/1980s stuff, and keeping them 40 years is inexcusable.
It seems like image-based interactive development is on a spectrum from "less alive" edit-compile-run, adding some interactivity with tools like gdb, continuing up through IDEs and REPLs, and up through Elm/Dart-style hot code reloading and BEAM-style runtime module reloading.
Maybe CL's flavor of image based development is just an example of good ergonomics, because the fundamental interactions (edit, compile, update image, test, repeat) are not really different from developing a C program inside a Docker container.
Julia has an excellent REPL, and Revise gives most of the "save code, you get your changes automatically" workflow one should expect from a good Lisp, with some minor caveats such as redefining struct types.
There's a new condition/restart package in the ecosystem, but this is currently of limited use because errors don't use it. However, one can use Rebugger to capture stacktraces and drop into an interactive debugger at the stack frame where the error is thrown (caveats mainly involve C 'builtins'), which is, for REPL use, the main advantage of condition/restart. I don't bother, since I have InteractiveErrors installed, which lets me navigate to a stack frame in the error and open the editor at that line.
None of this is image-based, either, but on a level playing field the interactive programming experience Julia offers is exceptional.
That’s only true if the list of programming languages is infinite or if the list of features is infinite.
If you want the list to be expandable without ever reindexing AND you want it to still be relatively compact, you can use Rosenberg-Strong, which creates square shells instead of Cantor's diagonal shells.
Bonus content: Szudzik shows how Rosenberg-Strong lets you create a bijection from positive integers to binary trees, ordered so that larger integers never yield a tree of height N until all tree instances of height N-1 are exhausted.
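The square-shell construction is short enough to show directly. A quick Python sketch of the Rosenberg-Strong pairing (my own transcription, not code from any linked source):

```python
from math import isqrt

def pair(x, y):
    """Rosenberg-Strong pairing: a bijection N x N -> N via square shells.
    Shell m = max(x, y) occupies the integers [m*m, m*m + 2m]."""
    m = max(x, y)
    return m * m + m + x - y

def unpair(z):
    """Inverse of pair: recover (x, y) from a single natural number."""
    m = isqrt(z)          # which square shell z lives in
    d = z - m * m
    # First half of the shell walks x up with y = m; second half walks y down.
    return (d, m) if d <= m else (m, 2 * m - d)

# Shell m is exhausted before shell m+1 begins, so growing either axis
# never renumbers existing pairs.
assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
assert sorted(pair(x, y) for x in range(4) for y in range(4)) == list(range(16))
```

The second assertion is the "no reindexing" property in miniature: the 4x4 corner of the table maps exactly onto 0..15, and extending the table to 5x5 only appends new codes.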
So that was the joke, that there are infinitely many languages and features.
It's not necessary though, your first statement is plainly false as written.
I bet your link has some interesting algorithm in it. Luckily for Cantor, he died before ever having to participate in a software engineering interview.
Treating the searchable internet as a collective distributed table would provide for a list of all programming language features accessible via internet search.
Its lineage is in vyzo's personal standard library, so yes. The standard library is pretty good, going for a maximalist approach with even stuff like an S3 client [0] and websocket [1] support out of the box, among others (though these are far from comprehensive libs). The underlying runtime is all in R5RS/R7RS with a good collection of SRFIs in the stdlib. The tooling is ok; we're actively working on an LSP implementation and enhancing the documentation story. Also working on propagating type information at "compile"/expand time to give better performance safety.
If it's actually standard compliant any library written in a RxRS-compliant Scheme dialect would be available. The FFI seems reasonably sane as well. They list a lot of libraries in the reference but some are lacking documentation.
At a glance it seems you could do a lot in systems and network programming, but if you want to produce a lot of PDF/A or run a cryptocoin business maybe use something else?
I miss pg. Lisp commentary has never recovered from his departure. Ten hours with no comments about the project or a tangent about Lisp itself is a darn shame.
Why not write one myself? Drained. Lisp is near and dear to my heart, but midnight, when I have to do our morning routine with our 7mo at 6am, isn't the time to wax poetic.
The most worrisome thing is actually the most ironic: no one is posting any hate comments either. Back a decade ago, you’d get into long battles about whether it was a good or bad idea to choose any Lisp, or at least some jeers from the sidelines. But that, counterintuitively, was one of Lisp’s strengths. When I was younger, that kind of thing originally sparked my curiosity: if most people were jeering, and most people weren’t too smart, then could there be something to this whole Lisp trend (even if it seemed like more of a university project than a trend)? And it turns out there was.
Loved or hated, you’re noticed. The pit of indifference is a bad place for an ecosystem to be. It’s long past time someone wrote about it in… well, in a way that can only be described as pg style. He had a knack for making young devs hungry for more. https://paulgraham.com/avg.html
The essays still hold up. They’re as true today as they were then. But no one will believe them without an example. I wouldn’t have believed it if pg hadn’t released Arc, and sometimes it feels like I’m the only one in the world who actually uses it to solve problems that I personally have.
But the nice thing about Lisp is that it’s always there, waiting to be discovered. It’s arguably one of the few types of programming ecosystems that can be discovered rather than designed. It’s precisely why there are so many choices. And it also doesn’t matter that nobody else uses it, just that you like it, that it solves your problems, and you find it endlessly fascinating. Hopefully our generation won’t be the last, at least for a long while.
OK, I'll jeer: I'm a big fan of [McCarthy60], but everything-is-a-list implicitly introduces order, and the 21st century ought to explicitly accommodate the unordered as well. (there ought not be any is-more-basic-than relation between ordered and unordered)
I agree! That’s actually not a jeer, it’s one of my main criticisms of lisp. You don’t need lists to have lisp. In many respects it works better without them; https://github.com/sctb/lumen proves it, since hash tables and arrays are the fundamental data structures. They have to be, because that’s the only way lumen can run in JS or Lua.
Every time I can’t delete the first element of a list in lisp (i.e., del x[0] in the Python sense) I get annoyed with racket.
The reason I look past it is because the benefits are so good that they outweigh the annoyances. I wouldn’t trade it away.
Try April (Array Programming Reimagined In Lisp)[0] then. Sorry, I am both an array language and Lisp fan for many years. You use Lisp for all of its goodness, and drop into APL to do numerics and array work in a language better suited for the problem at hand.
Hmm, maybe a marriage between Gerbil and J -> Jerbil!
I've been a fan of Gambit for a long time too. Love to see it being utilized this way.
Insisting on a single linked lists as the fundamental data structure is not necessary, that much I agree, but I would say, if you need del x[5], then in some cases you should not be using a list, but potentially a persistent data structure/PFD. This is an area that Lisps should invest in: Batteries included for good functional data structures.
This is impossible in traditional Lisp, and it’s the source of many of my frustrations. It’s why you can write an algorithm to destructively modify a list at any point — except index zero. The head of the list is the thing that’s used as a reference to the list. This doesn’t happen in Python; it’s why you can bind the same empty array to x and y, push a value onto x, and see it at y.
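Concretely, the Python behavior being contrasted here:

```python
x = []
y = x          # y and x name the SAME list object, not a copy
x.append(1)
print(y)       # [1] -- the push through x is visible through y
del x[0]       # deleting the head works fine here, because...
print(y)       # [] -- ...x and y refer to the list object, not its first cell
```

In a cons-based Lisp, a variable "holding" a list actually holds its first cons cell, so there is no list object for two variables to share; that is exactly why destructively removing index zero can't be made visible through every reference.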
I think there is less confusion about this kind of thing with Clojure, where a preference for immutable persistent data structures is strongly stated in the language design. So the persistent nature of lists and other data structures is foregrounded, and there is no confusion over why the equivalent of "del x[0]" is not supported.
Pretty sure you can do this in CL but I'm short on time so I'm not going to figure it out right now.
Most of the time I try to get rid of variable names so it seems somewhat strange to invent more than one pointing to the same data structure in RAM, but I'm a simple person that gets confused easily.
(defun vector-delete! (vector index)
  "Delete the element at INDEX from the given VECTOR.
Destructively modifies the vector.
The vector must have a fill-pointer.
This is not thread-safe, use at your own risk.

Examples:
  (let* ((x (make-array 4 :fill-pointer 4 :initial-contents '(1 2 3 4)))
         (y x))
    (vector-delete! x 0)
    y)
  ;; => #(2 3 4)"
  (declare (type (integer 0 *) index)
           (type vector vector))
  (assert (array-has-fill-pointer-p vector))
  (let ((l (length vector)))
    (assert (< index l))
    (let ((removed-element (aref vector index)))
      (loop :for i :from (1+ index) :below l
            :do (setf (aref vector (1- i))
                      (aref vector i)))
      (decf (fill-pointer vector))
      removed-element)))
(Granted, it could be done in 6 lines of code, but I'm already cringing at not providing a restart when calling it with an out-of-bounds index)
This is actually very good insight that makes me pause and ponder about its implications. Do you know any in depth research/paper on this topic?
Ordered data types are core to computers, actually: arrays, heck, even memory itself. I always thought the list to be the most fundamental construct, but outside the neat world of the computer, the real world is mostly unordered. Order is artificial, and indeed often not a given.
> Ordered data types are core to computers actually: arrays, heck even the memory itself
Has this actually been true in the last, say, two decades? Or is it just a thin veneer to make your highly concurrent machine look like a PDP-11? Right now order seems to be somewhat important only when it comes to cache optimization: if you stick things that belong together close to each other, you end up with fewer cache misses.
I mean, within 4k (or 1MB, I guess, if bigpages) pages, sure. But my understanding is that the OS, VMM, and TLB aren't going to guarantee anything about physical address contiguity -- even if the virtual addresses are contiguous -- are they?
I've used other Lisps, but Arc for me fits a very enjoyable programming space. There's a small enough core that I don't have to spend a lot of time figuring out, say, how to change logging levels or why Spring isn't parsing a POST body into the object it's supposed to be. But it's also large enough that I don't need to rewrite an http handler.
I've tried to keep it Arc-compatible, but I always run it on Anarki.
Run it like `(gensite "path/to/site/base")`. The site root is a folder with the following things in it:
1. a config file `conf.arc`, with config for the site. It has these keys (but I don't think they're all required):
1. sitename -- title for the homepage
2. site root -- url; mine is "https://zck.org"
3. description -- used in the RSS feed.
4. slogan -- used in the sidebar
5. navbar-entries-at-top -- extra links at the top, as a hashtable. Mine is ((tagged table ((href "art") (content "my art"))))
6. mailing-list-cta -- A thing put at the bottom of each page (except the homepage) that is a call-to-action for my mailing list.
2. a folder called `published`. This contains the entries in the site. Each file is the html contents of the body of the entry, and I don't believe it matters what you call the file. Each entry also has a bonus set of options in it, which I'll explain later.
3. a folder `pregenerated` that has things in it copied verbatim over to the output. I use it for things like my favicon, css, images, and some html games (https://zck.org/numberdle/?variant=rationerdle).
4. a folder `frontpage`. This contains entries that do not directly result in a page; they merely result in entries placed on the frontpage and rss feed. They have the same options that the published entries do. I use it for things like announcing when I've given a talk, or launched something that's in `pregenerated`. This can be empty; it just won't generate any entries like this.
5. a folder `codegen`. This folder contains files that contain arc code. Each file, when executed, results in a list of pages. There's some busy work in passing args into it, and getting the results back. Don't start by using this; like `frontpage`, it can be empty, or maybe even nonexistent.
Now, the important part of each page entry -- the "options". This is a serialized arc obj that is put at the top of each entry. It has keys `url-slug`, `title`, `date` (in YYYY-MM-DD format), `navbar` (set it to t to put this entry in the navbar), `frontpage` (the string that gets put in the frontpage for this entry; an <a> link with no href gets set to link to the entry), and `tags` (a list of strings that tag that entry; see https://zck.org/tags).
((url-slug ruby-hashmap-syntax)
(title "til: Ruby's hashmap syntax")
(date "2024-03-20")
(navbar t)
(frontpage "Ruby's hash syntax is confusing. There are two different syntaxes, and they <a>work surprisingly differently</a>.")
(tags ("til" "ruby")))
<div>
<p>I was writing some Ruby for the first time, and I made a hashmap with some data in it.</p>
```rest of file snipped
The arc hashtable is not printed as part of the html output; it's only used for its values, and the page itself starts after it. In this case, the page starts at the <div> tag.
I write my blog in Emacs org files, for which I've also written my own org-mode exporter (https://hg.sr.ht/~zck/ox-zhtml). I've done this because the built-in html exporter creates some really ugly html. You shouldn't be able to tell what html exporter created your html; the org one puts in a lot of "org" properties that are terrible.
I'd love to know if you get this working! Feel free to email if you want to talk off-thread or get more help. I'm sure there is more I missed.
Gerbil Scheme is a variant of Scheme implemented on Gambit-C. It supports current R*RS standards and common SRFIs, and has a state-of-the-art macro and module system inspired by Racket.
Seeing it has objects, methods, structs, optional type annotations, coroutines, and an actor model, I tried to build it, but the Gambit Scheme on my system was too old and I gave up.
Also an older but still useful overview of all the Schemes when used for scientific computing:
> Seeing it has objects, methods, structs, optional type annotations, coroutines, and an actor model, I tried to build it, but the Gambit Scheme on my system was too old and I gave up.
Compiling a newer Gambit from source is really straightforward. I ran compiles of Gambit over and over to stress-test Raspberry Pis to see if they needed active cooling (they did).
I tried it two years ago, when the GitHub-cloned version required a specific version of Gambit that I did not have. If the current distribution comes pegged with its own version of Gambit, like Racket does with Chez, that's great.
The biggest gripe I have with every "new" language, be it a lisp, C-derivative, or some variant of JavaScript, is that I rarely see a side-by-side comparison of something useful on any of the first 3 pages of the documentation.
I see a hello world. I see some obscure (to me) syntax. I see talk about how this will solve world peace.
But never "here is http.c and here is http.$newlang" side by side: this is how they perform, these are libraries, and these are language functions.
I think I have a similar reaction to you, but maybe with a bit of a different spin. For me, it's not so much that I want to see a literal side-by-side of something non-trivial (though that would be nice, too).
I personally want to see a non-marketing-bullshit statement of why other languages suck and why the author(s) feel that this new language doesn't (at least for some given domain or style). I obviously don't want the statement to be insulting or anything, but give me something real. Every new language claims to be "safe, ergonomic, performant, easy-to-learn, beautiful, scalable, cloud-integrated, AI-powered, "heart" open source, etc." I want a language that says, "Hey, other languages get X wrong and we don't. If you don't like it, that's great- go back to those other languages." where "X" could be error handling, performance, type systems, whatever.
Sadly, most LISP and Scheme implementations spend fourteen pages of README material explaining why they reinvented a particular wheel instead of, you know, putting a simple trolley together. HTTP implementations in most LISP/Schemes are particularly bad/abstruse, to the point where I just give up if I can’t do a simple HTTPS request with bearer auth and a JSON payload inside of 30 minutes.
That is short-term thinking for both linux and lisp. In the long-term, we need more experimentation, more creativity, and more risk-taking. Always. And what better place for that than on HN (at least for discussion purposes).
How long is "long-term"? 10 years? 20 years? Lisp has been around for over 50 years. How long do we still have to experiment with Lisp before we're happy we've found the "ideal" Lisp?
Carpenters are still experimenting with hammers and other tools. Carpentry has been around a little longer than Lisp (except of course the Ur-Lisp the gods used to create the universe).
There is always a need for both short-term and long-term thinking. If I am running a company, I am thinking short-term (at least when it comes to tooling). If I am dreaming or doing cutting edge research, I am thinking long-term.
I think we are still just babies in the history of programming. 50 years is nothing.
I think we'll be led by a programmer from another language who comes and unites the tribes against the languages that now dominate, a voice from the outer world. The "Lispan al-Gaib", if you will.
Interestingly I've had a similar experience. I used Chicken Scheme for a number of years after programming in Tcl for a long time. Tcl always seemed to be a kind of Lisp, an impression that only increased over time. Eventually I resumed using Tcl when I realized the language had been steadily improved over several years. With Tcl 9.0 on the horizon I see no reason to look elsewhere for accomplishing most programming tasks.
Tcl offers metaprogramming capability that rivals Lisp's, but the way it goes about doing it is... kinda nuts. I don't particularly care to stare into that yawning abyss for too long, so it sits well behind Scheme in terms of languages I'd reach for.
Tcl/Tk is still an unrivalled way to prototype a GUI -- and with Snit megawidgets, it's almost like programming a modern JavaScript component framework like React.
Yes, you're right, Tcl suffers from lack of Scheme-like macros. While a lot can be done using procedures, methods and introspection "tricks", often enough it's far from elegant. However I haven't been stymied all that often. Missing functionality can sometimes be provided by C-API extensions which I've gotten pretty good at writing.
Completely agree about the utility of Tk. I remember reading somewhere that HTML/CSS widgets were influenced by Tk. Don't know how true that is but I can see how it could be.
Fun thing is that Tcl was relegated to the role of gateway drug to Common Lisp, in my case. I still use it when I want a more modern stdlib or its well-integrated event loop.
I think the wording of the article title helps push people to this conclusion. My first thought was "we're 24 years into the 21st century, I'm sure we already have one of those".
Well, choice has its charms, but a non-fragmented experience, with the wider combined adoption and all the now-scattered resources devoted to it, has more actual benefits.
There's newLISP http://www.newlisp.org/ which takes a fresh look at lisp and has plenty of libraries.
Lately, I'm going back to what I used years ago: Common Lisp. There are great books describing it, and implementations like SBCL https://www.sbcl.org/ are solid.
p.s. Plenty of time to come up with the lisp for the 22nd century!
Newlisp actually revives ancient mistakes in Lisp history. For example, there is no lexical scoping -- and "contexts" are not a substitute for lexical scoping. Lexical scoping ensures that bindings introduced within a scope cannot leak outside the text of that scope, avoiding subtle bugs that may manifest with dynamic scoping. Hence, Newlisp's lambda isn't really lambda: it behaves like Lisp 1.5's, but very unlike Common Lisp's or Scheme's (absent side effects).
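For readers who haven't run into this distinction: lexical scoping is what makes closures behave predictably. A minimal illustration in Python, which is lexically scoped:

```python
def make_counter():
    count = 0              # binding introduced in this lexical scope
    def increment():
        nonlocal count     # refers to the textually enclosing binding
        count += 1
        return count
    return increment       # the binding survives, but ONLY via the closure

tick = make_counter()
print(tick(), tick())      # 1 2
print('count' in globals())  # False: the binding cannot leak out by name
```

Under dynamic scoping, `count` would be looked up in whatever environment happened to be active at call time, so an unrelated `count` at the call site could shadow (or be clobbered by) the counter's state; that is the class of subtle bug lexical scoping rules out.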
Newlisp's default (and most used) FFI is also... dangerous. You simply import a symbol from a shared lib and it gets bound to a procedure corresponding to a C call to that shared lib. No specifying of parameter types -- and most platforms do not encode parameter type information in shared libraries in a standard universal way. (Things like C++ name mangling, and COM, only apply to libraries written within their respective ecosystems.) But don't worry. Newlisp trusts you to get the parameter types right, because Newlisp is for the practical Lisp programmer. Woe betide you if you don't, though!
Guile's (system foreign) module provides a similarly dangerous FFI, but at least you can (and must) specify parameter and return types when you import a foreign procedure from a shared lib that way, allowing for checks for parameter correctness at the call site, if not the import site.
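Python's ctypes makes the same trade-off visible in one place: you can call an imported symbol blind (Newlisp-style), or declare its signature first (Guile-style) and get argument conversion and a correctly sized return type. A sketch, assuming a POSIX libc is loadable in the current process:

```python
import ctypes

# Load symbols already present in this process (dlopen(NULL) on POSIX).
libc = ctypes.CDLL(None)

strlen = libc.strlen
# Without these two lines the call still "works", Newlisp-style: ctypes
# guesses, defaulting the return type to a C int -- silently wrong for
# strlen's size_t on LP64 platforms once values get large enough.
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t

print(strlen(b"hello"))  # 5
```

Declaring `argtypes` also makes ctypes reject calls with the wrong argument types at the call site, which is exactly the check the Guile `(system foreign)` approach buys you at import time.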
I know it's a bit of an oddball in these discussions, but I find that I still have the most fun with Clojure. It's fast enough, relatively easy to get anything I want done, with proper functional data structures built in, with the entire JVM ecosystem to play with if I need it.
I got pretty into Chicken Scheme a few years ago, and I did a quick breakout clone in Racket as well, and I think they're pretty cool, but Clojure is the only one that has ever evolved past "toy" for me.
Keep in mind, I'm almost always in the engineering world more than the pure CS world, and yeah, it's often pretty useful.
The Apache tools are the biggest things for me; I do a lot of work with Apache Kafka, so being able to directly use the first-party libraries like Kafka Streams is useful. If I need to do any kind of distributed processing stuff, I have Apache Spark or Apache Flink when I need it. It's kind of falling out of favor now, but I still occasionally have a need for Apache Zookeeper as well.
Now, I'm sure that there's Clojure-first versions of these things, but the sad truth of the software landscape for me is that a lot of it is still powered by Java. I can either be tasked with reinventing a lot of the infrastructure myself by using a language that doesn't have good Java interop, or I can use a JVM language. I think that Clojure is the least-bad of the latter category.
In some ways, Clojure is an even better Java than Java; it's easy to compose together arbitrary java methods, without any kind of fancy fluent interface or anything, for example, and there's lots of helper macros that I think really do smooth over the Java-ness of certain interfaces.
This is how I feel about Kawa Scheme. It has very good Java interop without all the... Clojurisms which are like pebbles in my shoe while I'm working. It's what I'd reach for if I wanted to author, say, a large web service from scratch.
Clojure was the first Lisp I learned, and I started using it specifically for its concurrency support, not the fact that it was a Lisp. I later grew to love Lisp and played with other ones.
I bring this up because I am curious which “Clojurisms” annoy you? I probably didn’t notice them because I wasn’t used to other Lisps when learning it.
Common Lisp and Scheme are built on s-expressions, which are lists of symbols and other atoms. All of the core syntactic constructs are based on this. Clojure decided to add other syntax for things like vectors and maps. Which, okay, fine, other Lisps have syntax for those things too, but Clojure built core syntactic constructs out of them, and that just messes with my Lisper brain. The way let, fn, and defn work... ugh.
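Concretely, the difference being complained about looks like this (rough Scheme equivalents shown in comments):

```clojure
;; Scheme:  (let ((x 1) (y 2)) (+ x y))
;; Clojure puts the bindings in a vector literal instead of a list:
(let [x 1
      y 2]
  (+ x y))       ; => 3

;; Scheme:  (define (add a b) (+ a b))
;; Clojure's defn uses a vector for the parameter list too:
(defn add [a b]
  (+ a b))
```

Whether the square brackets are a readability win or a wart is exactly the disagreement in this subthread.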
A lot of Clojure people might be like "I don't see anything wrong with it, in fact I prefer things the Clojure way." Whatever. You do you, man. I just can't with that. Kawa Scheme for me every time if I want to Lisp with Java libraries. Second place, ABCL.
As someone who learned Lisp in the 1990s, starting with The Little LISPer and going all the way to classics like The Art of the Metaobject Protocol and dusty digital copies of Xerox and Genera manuals, with a little XEmacs Elisp on the side, I am perfectly fine with Clojure's design decisions, and it is also the only Lisp-like language I still reach for.
The discussion about parentheses is a bit ridiculous, but unfortunately their visual distribution does matter, and Clojure is more appealing to folks without a Lisp/Scheme background.
For the record, Apache Kafka defines a fairly sophisticated wire protocol: requests and responses, message formats, etc. This is the only way any client interacts with a Kafka broker. The Kafka Java library uses it, and so does librdkafka (which offers a low-level C API). The latter is what non-JVM clients -- Python, Common Lisp, and others -- use via FFI.
Yeah, I knew all that, I've actually used librdkafka.
If my goal were as simple as "read, write, and commit to Kafka", the vanilla clients like librdkafka would be fine. However, most of my personal projects over the last year have made pretty liberal use of the Kafka Streams library, which as far as I'm aware only exists in JVM land.
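Since Kafka Streams is a plain Java library, driving it from Clojure is just interop. The following is an illustrative sketch only: the topic names and config values are made up, a running broker is assumed, and default serde configuration is omitted for brevity.

```clojure
;; Sketch: a trivial Kafka Streams topology wired up from Clojure
;; through the first-party Java library (not runnable without a broker).
(import '[org.apache.kafka.streams StreamsBuilder KafkaStreams StreamsConfig]
        '[org.apache.kafka.streams.kstream ValueMapper]
        '[java.util Properties])

(let [props   (doto (Properties.)
                (.put StreamsConfig/APPLICATION_ID_CONFIG "demo-app")
                (.put StreamsConfig/BOOTSTRAP_SERVERS_CONFIG "localhost:9092"))
      builder (StreamsBuilder.)]
  ;; upper-case every value flowing from one topic to another
  (-> (.stream builder "input-topic")
      (.mapValues (reify ValueMapper
                    (apply [_ v] (.toUpperCase ^String v))))
      (.to "output-topic"))
  (doto (KafkaStreams. (.build builder) props)
    (.start)))
```

The point is less this particular topology than that the whole Streams DSL is reachable this way, with `reify` standing in wherever the Java API wants a single-method interface.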
All these toy lisps aren't really advancing the state of the art. It's dog bites man at this point, there really isn't much new to talk about.
There's nothing wrong with that, but, oh look, a new Scheme, yay. None of the previous 70 changed my life in any meaningful way.
Functional programming is easily available in almost any modern lang, virtually all of which have better tooling (and no, I have zero interest in using emacs) and ecosystems.
It seems some people really don't like you not liking new Lisp implementations, even though nothing you said is wrong: there are already a lot of Lisps, the non-toy ones have roughly the same performance characteristics across the platforms they run on, and most non-Lispers don't care about them.
I think some people may take umbrage at TylerE's statement of:
> virtually all of which have better tooling (and no, I have zero interest in using emacs) and ecosystems
Anyway, there are a lot of Lisp-like (for a given value of like) language implementations indeed, but then there are also a lot of recordings of various orchestras performing Beethoven's 6th Symphony.
Whereas those 10 languages that are not lisps are advancing syntax! Where there is syntax there's semantics, clearly. Therefore it follows that whenever there's advancement in syntax there's advancement in semantics.
Sorry, right. What I mean is that those 10 languages with new syntaxes can disguise that they're not actually new, better than 10 Lisps can. In the programming world at large, most people are duped by syntax.
Someone making just a RNRS Scheme, with nothing new in it compared to other Schemes is at least honest.
What major real-world systems that people actually use are implemented in Scheme? Lisp programming environments don’t count. I’m talking general-use software.
That post is from years ago, and even then almost all of it is in the past tense (used, not uses). The three things that I actually spot-checked were from the 80s. Yeah, lots of companies were throwing money at Lisp during the original AI boom; that's not news.
Not sure why you are so convinced Scheme is a useless toy language that you won't even bother looking for examples, but here is a list of active projects built in Chicken: https://wiki.call-cc.org/Software