Every time Clojure comes up on HN there always seems to be a lot of negativity. I've come to flinch just seeing a mention of the language, which I do still love and have used for a decade, but perhaps this is how everyone feels about their pet language. I think there's a bit of a tension with Clojure, that it's a weird combination of being principled and being pragmatic, which means that there are always languages that on paper beat it out on any given axis. At the same time Clojure people do have a tendency to paper over the cracks quite a lot, at least some of which must be cognitive dissonance.
The biggest problem is that it's very hard to picture why, at the end of the day, all the choices that went into Clojure come together into a productive whole for building real-world software. It's a really nice mix of terseness without preventing clarity, simple lightweight modelling paradigms, interactive development, easy access to multiple cores, and all on top of the JVM with its enormous ecosystem. It's not as Lispy as other Lisps. It's not as pure as other functional languages. It doesn't have a fancy type system. It doesn't have native performance. But it gets stuff done and it does so fairly elegantly in most cases. I've found it a really solid career choice, there's really very little that you can't solve in a satisfying way. Plus whatever you think about parentheses, Clojure syntax basically hasn't changed in 10 years, most new features are just libraries of new functions and macros, and for me that validates that it's the correct approach.
I've never been a huge fan of lispy syntax, but I've never been a big hater, either. Personally, I find the square brackets and curly braces to be just enough additional texture to make the code plenty readable, as long as you're using a good autoformatter. (Letting people make their own decisions about code formatting in any lisp seems to be a very quick route to a very bad time.) What I find, though, which makes me sad, is that many people knee-jerk reject any non-ALGOL syntax (Lisp, but also Smalltalk, ML, Prolog, whatever). To the point that I fear that not being able to understand the syntax becomes a point of pride. It reminds me a bit of the sort of asymmetrical intelligibility of Metropolitan French and any of the Canadian dialects, a situation where I also get the impression that the difficulty in understanding is more than a little bit deliberate.
So there's that. But I also do find that Clojurists have not been immune to some of the snootiness that seems to infect most functional language communities. And I wonder if some of the reaction comes from that, combined with Clojure maybe bumping into the rest of the world more often due to its identity as a JVM language.
Yeah I do see this attitude and it’s sad, I guess it’s hard to create a new language on an existing runtime and not try to market it (and convince yourself) that it’s better than the alternatives, but no doubt it goes further than that. Which is a shame because again, I think Clojure is good because it’s fairly pragmatic. It feels to me much more a product of software engineering knowledge than computer science knowledge (despite fancy and useful theoretical stuff like transducers, which of course exist in many places now).
I'm inclined to agree. For me, Clojure feels like the one alternative JVM language that really offers a more-than-merely-incremental productivity boost over Java for a lot of everyday problems. (Scala also has its strong spots, but I find them to be fewer and further between.) And also, conveniently, it lets you really easily switch to Java for the bits that Java does better.
It's maybe the quintessential Cassandra language, though. As a dynamic functional language, Clojure is a great complement to Java. And, while I've also never really been comfortable with macro programming, compared to Java's accumulated decades' worth of XML and code generation and bytecode munging and dependency graphs that have been sent through a wood chipper by annotations, s-expressions and macros strike me as a delightfully clean and maintainable-looking breath of fresh air. But all that stuff also makes it a bit of a tough sell among the kinds of folks who haven't long since abandoned Java for a language like Ruby or Python. A bit like trying to sell tie-dyed shirts out of a VW bus at a Barbara Streisand concert.
Your last point is very well put. Clojure's selling point to an extent was "look, a Lisp they might actually let you put in production!" compared to Ruby's, which certainly in the early 2000s was "rip off your suit and join a startup".
Could you elaborate on what “ A bit like trying to sell tie-dyed shirts out of a VW bus at a Barbara Streisand concert.” means here? I’m not sure what the colloquial terms actually mean :) is it that Streisand concerts were fashionable and tie dyes weren’t?
Tie-dye T-shirts and VW buses are associated with the hippie subculture in the US. A group of people much more likely to listen to Bob Dylan or Creedence Clearwater Revival than to Streisand.
Likewise, the sort of people associated with Streisand concerts -- mainly well-to-do, cosmopolitan, white (especially, but not exclusively, Jewish) women -- would not trust a shaggy man with a rusting VW bus with their money, let alone be interested in his handmade wares when there are perfectly good boutiques on Fifth Avenue (or Rodeo Drive) to patronize if they are in need of clothing.
OP was trying to imply that the people who still use Java have fundamentally different tastes than dynamic-language programmers, and so evangelizing Clojure to them would be a fundamental mismatch in tastes, like a hippie trying to sell to upscale white women.
> At the same time Clojure people do have a tendency to paper over the cracks quite a lot, at least some of which must be cognitive dissonance.
An executive I previously worked with commented one day "Every small language's following is a little bit cult-y. Clojure's is more so than most."
Personally, I think the technology itself is fine. The ecosystem, from my exposure to it, has a few warts. It's somewhat small, which I expect of what is still a niche language. The killer issue for me was the memes that slowed and discouraged the development of useful frameworks in favor of gluing together ad-hoc sets of libraries.
Of course, it may have just been that I was working with a batch of Clojure devs who were in those jobs because they were sold on the glories of functional programming purity. Hard to say. They were better than the Haskell shop I worked in, but not as much so as I would have liked.
Can’t argue with this, I find it immensely frustrating. I’m on my fifth or sixth library for interacting with a database, my third component library, fourth or fifth config library, and the data science story especially has always been terrible because of the aversion to frameworks. This isn’t necessarily a great hardship, and for the most part old code just works, but so many libraries fall by the wayside or just never reach their potential cos they were the singular vision of one smart person, while all the other smart people were off working on other stuff.
It became a major headache for me when I saw multiple teams of developers trip over the same blind spots. Gluing together libraries works best when you're enough of a domain expert to understand every aspect.
When you're writing public-facing web services and forget about security while pulling together your stack, this ignorance can become very dangerous!
The problem with lisp is that most programmers find the syntax extremely off-putting. I've been watching this debate go around and around in circles for 20 years now and nothing has changed. And as much as I appreciate the power of macros I feel like the features of other modern languages cover a lot of the same ground so the benefits of such a controversial syntax are diminished.
Lisp is a fascinating and very elegant language and every developer can learn something from it but I gave up waiting for the lisp revolution a while ago.
Lisp got a lot of things right and modern languages have borrowed a lot of ideas that were pioneered in various Lisps. But nobody has borrowed s-expression syntax and I don't think that's an accident.
As soon as a language borrows s-expressions, it's a Lisp. This makes it impossible in practice for a non-Lisp to just "borrow" s-expressions. Definitely not an accident.
On the other hand, you do have languages like XML which is basically a much more cumbersome syntax for s-expressions.
I don't think that's true -- we're discussing a language that was invented relatively recently and has s-expressions. It's just that if a language has lots of parentheses, it gets classified as a lisp.
The other problem with Lisp is that it attracts a fanatical element that rejects any criticism of the language. Every language has its zealots but Lisp takes it a step further.
Google Erik Naggum for an extreme example of this syndrome.
>>The problem with lisp is that most programmers find the syntax extremely off putting.
For all practical purposes Lisp doesn't even have a syntax.
The real issue is with programmers who stay at beginner and advanced-beginner levels all their lives.
This problem existed with Perl too. Agreed that a lot of text manipulation tasks that existed before the data exchange standards like JSON and XML came around don't exist anymore. The use cases for Perl have reduced. But the overall point stands of course. People are willing to write 100 classes to extract a string from data rather than learn to write 10 lines of regex code.
If the only thing you are willing to learn and use is a for loop, an if statement, and function/class syntax, anything will look off-putting.
Just see how many programmers struggle with concepts like concurrency, pointers, recursion etc. It's just that these people struggle to hold non-trivial concepts in mind.
There's an elitism inherent in this take that I think is misplaced. A lot of programmers are perfectly capable of weighing the pros and cons of s-expression syntax and deciding it's not for them. Too many Lisp programmers believe that Lisp way is the right way and not just a set of tradeoffs like every other language design.
>>A lot of programmers are perfectly capable of weighing the pros and cons of s-expression syntax and deciding it's not for them.
Most of the programmers today have negligible experience with Lisp.
It's also not really elitism. I'm not saying people are stupid. But people take great pains to avoid reading documentation and understanding anything in general. Which is why I gave the Perl example.
Reading documentation for 30 mins could help you understand the issues and help you fix them at root, instead of hours of Googling. For some reason people do the latter, not the former.
I actually think your Perl and specifically regex examples were spot on. In my career I have only worked with probably 4 programmers who really understood and could think in regex, which gave them the tools to solve extremely complicated parsing problems rather simply. The rest of the teams just relied on those people to do their regex thinking for them. The sad part is that regex is rather simple to learn and just takes a little effort, but people treat it like it is black magic.
A lot of regex-based solutions are not robust against bad input.
Bad input doesn't always mean that the regexes don't match!
Regexes have a very happy home inside lexical analyzers for tokenizing languages. Lexers define what is correct input and match every case with some regex pattern. If there is no match, then the input doesn't contain a valid token: the lexer can loudly complain (logging an error that can be treated as fatal by the overall compile job), drop an input character and try matching again.
The typical regex solutions in Perl (Awk, ...) scripts go like this: "look for this flimsy, minimal regex somewhere in the stream and assume it's the right thing, then match this other regex in the same line and---woo hoo!--that's our item. Oh, false positives, schmozitives."
Basically, if we were to pin it down to a single difference in principles, it is that using regexes for searching for something small in something large is different from matching an entire input in its totality (and then getting at the desired parts).
Searching is the quick and dirty thing that's easy to reach for.
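To put it concretely (a quick Clojure sketch, since that's the language under discussion; the date pattern is arbitrary):

(re-matches #"(\d{4})-(\d{2})-(\d{2})" "2021-03-09")   ;=> ["2021-03-09" "2021" "03" "09"]
(re-matches #"(\d{4})-(\d{2})-(\d{2})" "x2021-03-09x") ;=> nil, bad input fails loudly
(re-find    #"(\d{4})-(\d{2})-(\d{2})" "x2021-03-09x") ;=> ["2021-03-09" "2021" "03" "09"], quietly "works"

re-matches anchors the pattern to the whole input; re-find just hunts for the first thing that looks right.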
I have tried really hard to fall in love with LISP style languages. This includes implementing my own LISP interpreters/compilers just for fun. I also know enough Haskell (and Idris) to compare LISP style languages with the “opposite” end of the functional language spectrum. And I am the maintainer of a high performance functional programming language at work. In the end I still end up never reaching for LISP or Clojure when solving real world programming challenges. The key reason is that I am more productive with strong types. The same reason why I initially loved Ruby and then stopped using it the moment I tried to build anything bigger with it. I realise that not having types makes some people happy. That’s great. It just doesn’t make me happy.
Whoa, this was a point about syntax (non-Algol vs. Algol) being overstated. Haskell and Idris absolutely fall in non-Algol languages that make people do a double-take.
For the record I like lots of languages - dependent types are awesome. I find that the language I reach for will mostly be a function of the problem space.
Just an FYI, certain Lisps have static typing as well.
I know quite a few really brilliant programmers that were exposed to scheme or lisp in school but prefer other languages for various reasons in their professional work.
No offence but it sounds as if you are invoking a “No True Scotsman” argument here? Also (using your argument) how do you know LISP is better than C++ when you haven’t programmed C++ “in anger”?
This clearly isn't true, though. Lisps do have a lot of syntax. In the Clojure samples in this blog post alone we have:
• Parens
• Curly braces
• Square brackets (why are they here, if Lisp is just s-exprs?)
• Colon-prefixed symbols
• Quote-prefixed lists
• Functions with names like >!! and ->>
That's pretty obviously syntax. You could argue it doesn't have keywords in the way C-type languages do, but magical functions that are defined as part of the language and should never be changed for all intents and purposes might as well be syntax too. So in the end the difference is kind of academic.
I'm not saying it doesn't have any symbols, and yes Clojure makes a pragmatic choice to include more primitives than just lists, I think people are pretty happy with that. But everything you mention was present in the very first release of Clojure (function names are just function names). The changes since then have been tiny (some reader stuff which you could easily ignore). My point is once you know how to read and navigate Clojure code you're never going to have to worry about new primitive stuff being added, because the syntax is enough for life. I like that, and dislike languages like C# where every release and every new language feature introduces new keywords. That's an entirely personal thing, I'm not saying it's better, but I do think it's to Clojure's credit that it doesn't have to add new syntax to get new things done.
So if it's not an issue in C languages, how come this is an issue in Clojure?
The argument here is lisp has a simple rule. The first element of a list says what is to be done, the remaining are its inputs. That's really all there is to it (mostly). Using () for everything was really a kind of overloading, which is why I guess they used different opening and closing characters for different contexts.
To be precise, those are just function names, not syntax. But even Clojure having a more complicated syntax than Common Lisp or Scheme is much simpler than Java, for example. Instead of f(x, y) it's (f x y), essentially.
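For instance, a rough sketch:

(+ 1 2 3)                  ;=> 6
(max 4 9 2)                ;=> 9  (same rule: operator first, then inputs)
[1 2 3]                    ; a vector literal
{:name "Ada" :year 1815}   ; a map literal with keyword keys
'(a b c)                   ; a quoted (unevaluated) list

The extra brackets are data literals rather than new evaluation rules.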
When I mentally diff Clojure against other JVM-based Lisps -- mainly thinking of Kawa and ABCL here, which are dialects of long-established Lisps -- Clojure comes up different enough to be weird, but not sufficiently better than the alternatives for me to put up with the weirdness and choose it over them.
It's kind of like when I recently evaluated Visual Studio Code (for the (1+ n)th time) to use as my default editor. Everybody I know at work uses it, and for a couple years there Hackernews made a collective O-face every time a new version dropped from Microsoft. So it must be good, right? Well, it was different enough from Emacs to force me to get used to an entirely different workflow, but not enough better than Emacs (really, for my purposes, not better at all) to justify making that enormous transition. So back to Emacs I trudge. A reasonable person may make the same call about transitioning from Visual Studio Code to Emacs when they hear advocacy from Emacs users like me.
That's where Clojure is for me -- in this limbo of it sounds good but it's not really worth it for me, personally, to switch. Plus its community has a strong contingent of douchey advocates -- not quite at the level of say, Rust or Newlisp but getting up there -- who act like Rich Hickey was God's gift to programming and make me think "So that's what Smug Lisp Weenies sound like to the rest of the world." And that further cements me in the realm of "hard pass" w.r.t. Clojure. Call it an irrational, emotion-based response.
To be honest I don’t really believe any programming language is that big of a boost in the real world, outside of certain hard requirements for correctness or performance. Ironically if I were to want to make another 10-20 year bet on a language it would be Rust, so perhaps I’m drawn to douchebaggery, but I’m sorry that’s your experience of Clojure.
> who act like Rich Hickey was God's gift to programming
This is a typical cult pattern: the “esteemed leader” who is godlike perfect and can do no wrong. Having said that, in all fairness, Rich Hickey is a great presenter and I learned a lot about functional programming from watching his talks. I completely disagree with him on types but still learned a lot.
Emacs is slightly painful to use on Windows just because I don't want to have WSL2 just for an agreeable set of Unix tools, so I have tried to fall in love with VSCode for the times I find myself there and not in Linux. It is good, and will probably and deservedly dominate developer mindshare for a generation.
That said, there are a lot of fundamental differences that might be painful for someone transitioning from Emacs. First obviously is just keybindings, there are great packages to handle most of the basics but like with any new editor it's a faff to get everything right over time.
Second is that it has lots of special cases of windows, which are hard to manage. In Emacs everything is the same, just a buffer. Your shell, your database client, your code, all buffers. You decide how all this is laid out and have absolute control. The same navigation, manipulation and search functionality exists everywhere. When you get an autocompletion of some text, it works the same everywhere, and the same navigation and search functions work in those menus. In VSCode, you have some text editor windows, but also tab groups which behave subtly differently, and then the panel and sidebar which are both different. I don't know how to 100% stop these things popping up, and the terminal seems like the only one available, so they're a regular annoyance to someone who is used to owning every character of screen real estate. Extension developers don't create extensions from the point of view that users want to consume textual information like everything else. It's not bad or wrong, it's just that Emacs works at a lower level of abstraction. It would be entirely valid to point out that VSCode's level of abstraction enables a more vibrant extension ecosystem.
Third is that there are modes in Emacs that are much more mature than VSCode. Calva for Clojure REPLs is pretty weak compared to CIDER in Emacs. All the git modes are inferior to Magit, even the direct clones of Magit (which are good and tastefully done). All of these are very actively developed and will get better, they're just not there yet.
Overall Emacs has fewer primitives, but they apply everywhere to everything, and I find that a much more humane environment for the kind of work I do (it's why it's a nice environment for a Lisp, which has the same philosophy to code).
> Every time Clojure comes up on HN there always seems to be a lot of negativity.
Frankly I'm just sick of hearing "Why you should use Clojure" over and over again. Posts about other languages center on some problem that language solved for them, or some problem they have with it. Clojure posts are always cold-call cheerleading about how good it is in a general sense. Tell me something cool you did with it. Tell me how it saved you a mind-blowing amount of work. I'm not interested in hearing for the Nth time how someone thinks it's an overall excellent choice.
At my previous job, I implemented a stream processing solution in Clojure, which superseded a Java one which never had quite worked (it was a pretty bad codebase, in fairness), added more complex functionality, and allowed me to drop 1/3rd of the code in that module (from 18KSLOC to 6KSLOC). More importantly, it worked as intended, and only a couple of negligible bugs made it to production, which for the complexity of the system, and the scarcity of QA we had, was surprisingly good, tbh.
To not go absolutely insane with Lisps, you need some kind of parentheses tool with your editor such as ParEdit. The up side is once you get used to your tool, you can do really cool things and navigate/change code in higher level ways. But the massive downside (in my experience), is convincing coworkers to adopt a Lisp is practically impossible. The language itself being so foreign, and when they ask about the parentheses bringing up a tool they need to learn on top of the language is a double whammy.
While I would agree that paredit in Emacs is fantastic, the real shock comes when you have to go back to editing code in those languages that have the weird arbitrary punctuation. I mean, your editor can't even properly manipulate those expressions most of the time. In order not to go absolutely insane when dealing with JavaScript, C++, Java, you need absolutely top-notch editor support, and even then you can't do everything that paredit does.
This gets even worse with languages where indentation matters (Python, and the horrible abomination that is YAML) — which aren't even auto-indentable, because the editor has no idea what you actually mean. I'm not sure if you can avoid going insane with those.
You can get quite close to "AST editing"-like experience if you just use the expand/shrink selection feature of a JetBrains IDE. It supports most mainstream languages.
You just select a syntactic unit with opt-up/down, then you either type over it, copy/cut/paste it, or press left or right to go to the beginning or the end of the selection, hence using the selection as an intermediate step to achieve AST-level navigation.
And of course you can teach Emacs to behave similarly, using the Expand Region (https://wikemacs.org/wiki/Expand_region) + some hacks, but I agree, that smartparens also solves this problem quite well.
UPDATE: also use avy-jump for emacs or acejump for intellij: https://plugins.jetbrains.com/plugin/7086-acejump
These, in combination with expand/shrink-region operations, are a significant productivity boost, and they're easy to learn and teach.
This is a common argument, but I think your mileage may vary. I had a Clojure job for many years, and it was the only thing I wrote in that time. When I moved to a different job in a more conventional language, I also had that initial frustration from the syntax context switch in my head. But it was very short-lived. Now I find that there’s really no such frustration, it’s just a different way of writing code, going from either direction to the other requires some getting used to it.
I think you might have just got used to it, but it's definitely worse. My job involves consistent writing of 50% Clojure, 40% Java, 3% Kotlin, 3% Scala and another 3% of others (Python, Javascript, HTML, SQL, etc.). And so as I actively code in both Lisp and Algol syntaxes (and whatever Python is), I definitely always miss my structural navigation and editing features in the non-Lisp languages. So I'm guessing you kind of just forgot how nice it is.
Having said that, there's some disadvantages to the Lisp syntax as well, rightward drift due to constant nesting is real, and can make readability and even some edits a lot more annoying, whereas the flatter structure of other syntaxes doesn't have this problem as much. I still find its pros outweighs the cons personally, but your mileage may vary.
Nah, the benefits of structural editing can be nice, but so too are the benefits of a great line-by-line editing workflow ala vim, which makes one just as productive, regardless of language.
And I think that a job where you have to constantly switch between lisp and non-lisp styles would be a lot more frustrating than just using only one style and getting used to it, so I can see your pain there.
I find that the default movement commands in Emacs match Lisp syntax pretty well (although paredit is still much better, e.g. when quotes and commas are involved), but when I switch to C++, they're infuriating. Since I'm editing code written in a programming language and not text written in a natural language, the forward-word command can often jump over an entire line's worth of C++ tokens, and sometimes it jumps into the middle of a token. This mismatch is really tiresome. Vim commands usually hit closer to the "move by token" motivation, even though it's still just regular expressions. And so editing in it is a breeze (but then it has other disadvantages that deter me from it).
Overall, I don't think you need the full power of paredit when programming. For any given language, your editor movement commands understanding what tokens in this language look like will usually already be sweet and enough.
Oh ya, Emacs default movements aren't great for more line-oriented syntaxes, even for s-expressions I have customized things a lot more to my liking and smartparens isn't part of default Emacs either. I don't think Emacs really expects you to stick to defaults, the whole editor is designed to be customized.
When I edit the non s-expressions languages I mentioned I do it in IntelliJ, sometimes with the vim bindings.
In my experience, it's not about being line-oriented. It's about being punctuation-heavy or not. Lisp, except parens, for the most part doesn't have all that much punctuation (quotes and commas are rare). Now let's take a C++ program along these lines:
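#include <iostream>

int main() {
  std::cout << "hello " << "world " << std::endl;
}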
If my cursor is after "main", then M-f will move my cursor to after "std". It completely ignored "()" and "{". Another M-f moves my cursor to after "cout". Here it ignored "::". Another M-f moves the cursor to after "hello" ("<<" and the quote-mark ignored), another to after "world", and another to after "std" (quote-mark and "::" ignored) etc. It behaves similarly when hitting C-Backspace, where sometimes half a line disappears suddenly, because it was punctuation-heavy. I think once I stumbled upon code where it deleted several lines. On the other hand, when I have a variable like "big_number", then Emacs will happily jump into the middle of it. Even though it's an indivisible token in the eyes of the language's lexer. Of course there is a well-known hack of adding the "_" character to be recognised as a "word character". But it doesn't solve the issue really. The issue is that forward-word uses just one set of word-characters, instead of several sets of characters/regexes for different syntactic categories, which would enable jumping first to "cout" and then to "<<".
My experience is that I agree standard punctuation is easier to write, but it's harder to get the big picture from it than from non-standard punctuation.
Anyway, it's way harder to adapt to "parenthesis before the function name vs parenthesis after the function name" and "semicolons after every statement vs. never use semicolons" than "punctuation is all parenthesis and commas (or newline and space as in normal Haskell) vs. punctuation uses the entire keyboard".
Meaningful vs meaningless indentation is also one of the things that don't give me any problems at all to adapt.
You can do indentation with the same semantic rigor as parentheses - the problem with Python is less that it’s whitespace-significant and more that it’s procedure-based rather than expression-based. Parenthesis might mitigate this somewhat to the human eye (at the cost of some clutter and compromised immediacy), but unlike a LISP the Python interpreter doesn’t really understand what an expression is.
By contrast, I never have these kinds of semantic issues in whitespace significant languages like F# - I make mistakes that I might make less often in Scheme, but the compiler usually sets me straight since it realizes a branch isn’t returning a value, etc.
In my view parentheses versus whitespace is really a judgment call based on how your eyes read the source code. The crucial thing is having something like s-expressions.
Paredit (smartparens, actually) works semi-decently in almost every language I’ve tried: there are quirks that have to be worked around, but splicing/slurping/etc. are a lot more convenient than the alternatives.
I agree with you and I really enjoy the flow you can get into with s-expressions. But in my experience this frustration is typically viewed as very different and foreign from the frustration more mainstream languages bring. I think one reason is the mainstream frustrations have a lot of similarities. Wrangling blocks in C-like languages and even languages like Python and Ruby have a lot of similarities, whereas in a lisp it's just completely different.
I am a bit puzzled reading this. I have programmed in C++ for 20+ years and (as far as I know) am still fairly sane. I also have never met or heard about anybody programming in C++ who lost their mind programming C++ ;)
Not really, it's not lisp's fault that you have to carefully count parens in modern JavaScript without the tools that make it easy in Lisp, same with nonsensical indentation in Python (miss one space and watch the program explode at the worst possible moment).
What Lisp helps you with is grokking the actual operations of computing and, especially when all you have is a really dumbed-down ALGOL, it opens you up to more programming methods and techniques. All of that happens at a layer high above syntax.
>What Lisp helps you with is grokking the actual operations of computing
Computing in the theoretical sense, right? I've always heard the opposite of this claim: Lisp abstracts away how computation is done on actual hardware. Isn't that what the famous Alan Perlis quote was referring to?
I'd say both, though low-level is more specific to compiled Common Lisp implementations rather than general for all of lisp family.
Specifically, ANSI Common Lisp is equipped with a function called DISASSEMBLE, and on many implementations it will provide you not only with plain assembly dump, but also with comments regarding said assembly, for example a comment specifying "we're calling X here", "this is handler for single parameter, and this is for multiple parameters" (in case of optional arguments to function)
Yes, I'm a solid lisp fan, but anytime I have to edit lisp without paredit I die.
Now paredit has been around for decades (1991? I forget) so it's not like it's a rarity.
Now that said, a little bit of paredit-fu allows for some funky coding sessions.. you can swap sexps, move blocks up down the tree .. you can even process the sexp with some elisp code in emacs.. it's very very swift.
What annoys me is the people who never expanded their knowledge beyond a few paradigms .. they'll stick with python only or js .. or cpp. They're stuck onto a few libs and syntax.. it's a pity.
The interesting thing is that Lisp syntax editing mechanics scale. There are more “moves” that you can do with a Lisp, especially if most of your code is also functional.
My approach: learn 2-4 navigational moves first for a while. Then add more nav and editing moves at a later point.
As with everything: deliberate practice leads to mastery. At some point it becomes apparent that the weird syntax is a feature that has huge upsides (another one being macros for example).
ParEdit's nice, especially when combined with mark-sexpr (C-M-Spc) and narrow-to-region (C-x n n). Somewhere out there is a smattering of Elisp for gracefully handling recursive narrowing.
Thanks for being levelheaded in your assessment of Clojure. I am so tired of claims that language X is “better” based on a subjective selection of what “better” means. X might indeed be better for Y (that you care about) but not for Z (that I care about).
I really appreciate both Clojure and Haskell. I agree that
> nearly all of the claims made about Clojure here can be made about haskell
, but I'm not sure about "more strongly".
## DX with Haskell (and ML friends) I miss in Clojure:
- Harder to do sound, up-front design with data types and module interfaces, where the implementation becomes almost trivial once the type signatures make sense
- Consistency checks from compiler / type checker
## DX with Clojure I miss in Elm/Haskell:
- REPL ergonomics, where every action I could think of makes sense as a REPL command, leading to very small incremental pieces.
- Excellent default data structures with literal representation and serialization (EDN)
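For example, a single EDN literal is at once the code you type, the data you manipulate, and the serialization format (a small sketch):

{:id      42
 :tags    #{:clojure :edn}
 :created #inst "2021-03-09T12:00:00.000-00:00"
 :points  [[0 0] [1 2] [3 4]]}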
I'd love to read a point-by-point discussion of the article sections comparing Haskell and Clojure, though that's a lot to ask for in a comments field.
I've done a non trivial amount of both Haskell and Clojure, and this comment really hits the nail on the head. Each has non-overlapping strengths that I miss when switching. However, I've lately found F# to be my nearly perfect blend of enough things that I regularly use that I mostly just use that or Haskell when I'm messing about with katas.
I've had bad luck with Haskell's laziness causing hard to debug performance issues. Also the lack of libraries really hurts when it comes to even basics like queue and database interactions. Many times there's a few libraries that are all half finished. Don't get me wrong, I really like Haskell a lot, I just find that F# brings a lot of well-thought out compromise between Haskell and the wealth of libraries in .NET for production code. If it was my money in my startup, I'd have no qualms betting on F# to get me to market as soon as possible. If it was my personal OSS project that I work on for personal edification and enrichment, I'd choose Haskell nearly every time.
If I had to pick one single advantage of F# over Haskell, it would be the general lack of partial functions. F# also does some really neat things with pattern matching. (See, for example, active patterns.)
The C#-style object model is arguably an advantage for certain purposes. I always feel guilty doing it, but object oriented domain modeling really does feel more natural for certain classes of problem.
On the downside, F# lacks higher kinded types, and you're absolutely on your own when it comes to making sure things that should be pure are actually pure.
Really nice insight, thank you! Clojure has always been a bit of a mystery to me, so this is interesting to read.
- REPL ergonomics, where every action I could think of makes sense as a REPL command, leading to very small incremental pieces.
This is the only point I would dispute. The GHC interpreter `ghci` is very powerful and offers a lot of the same upsides. Beyond this, the language server offers in-code evaluation with "-- >>> expression" syntax, which is a cool new step towards fast looping UX. Clojure is certainly great at this, but I'd say Haskell is not far behind.
I've only dabbled with GHCI. I've used it as a standalone REPL for trying out small things, the same way I'd use a Python or Javascript REPL. I haven't used the REPL as /the/ developer interface to the program. In Clojure, I would (1) start a REPL server, (2) connect to it from my editor, and (3) send expressions to it. I didn't develop Haskell that way, though I think it was possible with Intero[1].
Within the Clojure community, there's a perception that the Clojure REPL is one of its strongest selling points[2].
Are you using the REPL actively when developing?
Edit: really curious about the "-- >>> expression " syntax! I might have to give Haskell another go.
Edit 2: Example of this interaction in practice with VSCode[3]
Have you tried out ghcid? It basically just runs ghci on your program every time you save, and gives an updated list of errors and warnings. Not interactive in the sense that you don't manually test your functions with it, but like 95% of debugging in Haskell is just fixing errors at compilation time. I find it to be a very nice developer experience. Just need a text editor and a terminal with ghcid open and you get immediate feedback as you program.
Haven't heard of it before, but this looks super interesting! Thanks for the recommendation. I really like the fact that my whole development workflow could be a text editor and a terminal.
I enjoyed developing Elm with TCR[1] a while back; also with an editor + a type checker (plus the revert part). I recompiled my whole source on each save; incremental recompilation should scale better.
This is definitely weaker than a SLIME-like REPL, but I use ghcid --warnings --test My.Module.Tests.runTest, where runTest is tasty's defaultMain, to get near-instantaneous test running on changes to files. (Since it's GHCi instead of GHC, waiting for a full compile is unnecessary, and it still has the necessary smarts about what needs reloading to avoid reloading the whole project every change.)
But AFAIK, GHCi doesn't have any state saving functionality (I don't think the dev environment is even integrated enough for any to work), and the little code editing available is entire-function only. The Haskell REPL is entirely oriented toward the "you write the code on the editor -> you evaluate it on the REPL" way (again, AFAIK, the GHCi manual is huge).
That said, yes, it's a really seamless cycle of you write your code on the editor -> you evaluate it on the REPL, and yes, Haskell requires doing that very little, most problems stop at the type checker. Haskell really does not afford interactive programming, but I wouldn't classify that as a problem.
Yeah, that makes sense. If the killer app is JVM interop, nothing but JVM based languages should even be on the table. The "why Clojure?" question just becomes "because Clojure is the best language on the JVM," which is not too interesting of a topic IMO.
Haskell has decent interop with C/C++ languages, but certainly nobody uses haskell because of that.
If the killer app is practical usage then Clojure clearly comes out on top.
The problem with functional language adoption is people keep pushing Haskell like it's anything more than, at its essence, a language-exploration research project.
The fact that you'd have a comment that ignores a language because it wins by default by being practical is more proof that people evaluating languages are speaking two different... well, languages.
Some are looking for what they feel has the coolest ideas; others are looking for languages with very cool ideas, much better than what they're using today, that they can still be effective/productive in. Haskell is cool if you want to think about or play with ideas. Clojure has cool ideas and you can still be productive in it (i.e. full modern library support).
As well, if you don't know, F# falls into that same bucket of cool ideas, better than your avg imperative language, and you can still be very productive in it thanks to the language and the .NET libraries.
We're trending to a point where any non-systems language is just an exploratory language unless it's built on the JVM or .NET.
Otherwise the productivity loss from lack of libraries is almost impossible to overcome with any possible productivity gains from the language.
> We're trending to a point where any non-systems language is just an exploratory language unless it's built on the JVM or .NET.
I certainly hope this does not turn out to be the case.
Both the JVM and .NET impose a certain type system on all their client languages. Those languages have an option to embrace it, like F# or Kotlin have, or to struggle against it, like Scala has, but they don't have the option to truly pull free of it. Not without shutting out effortless interop with the rest of the platform, and, in doing so, undermining the whole purpose of being on those platforms in the first place. And, since most of the interesting developments in programming languages center on type systems, that implies that huddling together on the Big Two bytecode VMs stifles a lot of really interesting innovation.
> since most of the interesting developments in programming languages center on type systems, that implies that huddling together on the Big Two bytecode VMs stifles a lot of really interesting innovation.
Not disagreeing at all. Interop with a library means interop/conformance with its type system. Which means if you want languages with new / innovative type systems, someone will need to build a library system to resolve this.
Either an untyped library system as large as the .NET or Java library systems, or a library that has a trivial way to tack on type transformation of some form so that each language that interops with it can minimally add a type translation layer between the two. What that looks like, I'm not certain.
But as long as engineers need to be productive, they need a robust modern library system. No new lang will get adopted if the lang author also needs to build up a complete library system, so it must be a general component.
I'm hopeful that Rust will lead in a good direction.
A C-style ABI, by virtue of being the least common denominator, is probably the best bet for re-usability across paradigms. Higher level languages would want to write idiomatic façades, but they already habitually do that anyway, even on higher-level platforms like Java and .NET.
And I think that deterministic memory management is probably also a pretty important feature. You don't want your libraries all bringing their own clever ideas about object lifetimes and such. But you also need it to be very reliable; a real danger with inviting libraries written in C and C++ into your process space is that they are liable to corrupt your memory. Rust's affine type system seems like a big step in the right direction here.
Similar thoughts for the error model. I don't have any particular complaints about exceptions, except that you don't want to be bleeding them on an external API, because that ends up being another spot where languages can fail to mesh.
What's missing, though, is that there is no good cross-platform standard for libraries that work with the C ABI (neither in source nor binary form) for other languages to plug into. So that's where I get to thinking that Rust might be closer to (if not exactly at) the mark than C is.
I don't think that's really the case anymore. You have to separate three things:
1. Using the JVM for its capabilities, like the JITC and GC engines, which are very powerful.
2. Using the JVM for Java interop.
3. Using the JVM for generic cross-language interop.
The nice thing about the JVM especially now with Graal/Truffle is it lets you explore and choose almost any point on the spectrum between "we're totally alien and the JVM is just our runtime" and Kotlin-esque "we are basically Java v2 with perfect interop". Modern JVMs are capable of running languages as diverse as Haskell, JavaScript, Kotlin, Ruby, WebAssembly, LLVM bitcode etc. Obviously if you're coding in C or Rust and running it on the JVM via Sulong, your interop is going to be limited to creating objects and invoking simple functions on them. If you're in Ruby or Python your interop gets better: you can expect automatic translation of things like Java-world collections to a vaguely native-language like collection via the Polyglot interop layer. And if you're Kotlin then you don't even create your own collections at all, just use the JDK standard library.
The point is, you don't have to use the Java bytecode type system to interop with Java or other languages anymore. Graal has fixed that. Your language can have any arbitrary semantics and it will still be JIT compiled, GCd and it can still call into other languages. The closeness of the type system is now a choice you make that trades off ease of use of Java libraries vs whatever divergence you want to have.
> Modern JVMs are capable of running languages as diverse as Haskell, JavaScript, Kotlin, Ruby, WebAssembly, LLVM bitcode etc.
As long as you don't mind either poor performance, or paying Oracle a bunch of money for good performance.
What all the fancy marketing takes pains not to say directly is that GraalVM is shareware. The open source version is just a basic version with some teaser features to get you interested, and, in classic Oracle fashion, the price of the full version is, "If you have to ask, you can't afford it."
I have no real objection to that model for other products, like BerkeleyDB. But I wouldn't want to build an entire open source ecosystem on a foundation like that.
The open source version of Graal does not have poor performance, by any means. It's more or less as good as regular Java, or better for some languages, but it's not really worse except perhaps on a few specific microbenchmarks.
Moreover, GraalVM is open source. Please don't try and redefine basic terms, as otherwise by your logic Linux would be shareware because Red Hat sell a better version of it, Android would be shareware, Chromium would be shareware etc. Making an open source product and selling a better version that isn't is perfectly legit. If you don't believe that, how do you envision platforms be funded?
As for the pricing, it is or was on their online shop. You could buy it with a credit card. Seems they have a problem with their store right now, but I've seen public price lists in the past. It's expensive but not at all "if you have to ask you can't afford it".
From what I've seen, it's only the small benchmarks where the open source version of Graal is on par with regular Java, because their short run-time ensures that all the bits that give HotSpot its name don't have a chance to kick in.
I think my point stands regardless of any quibbling about terminology. I've got no objection to some sort of free-however-you-capitalize-it-but-with-paid-premium-features model in general. But baking such a business model into something as fundamental as a cross-platform ABI standard sets a really bad precedent. Having lived through the late '90s, that sort of thing immediately brings the phrase, "embrace, extend, extinguish," to mind.
I'm glad to hear they've started publishing a sticker price. My memory isn't what it used to be, but I believe that wasn't the case when I evaluated it.
Are you mixing up Graal the JIT compiler with Native Image the tool that spits out ahead-of-time compiled executables? When I say "Graal" I mean the JIT compiler that can run on regular HotSpot and is just an ordinary Java JITC implemented in Java.
You have to distinguish between technical standards and implementations.
JVM bytecode is a technical standard for a portable, cross platform, strongly typed ABI. Anyone can implement it based on the docs.
Truffle is an open source API with a clear specification for creating interpreters. Anyone can implement it based on the docs, although there's no reason to do so given its permissive licensing.
Beyond that it's all implementation: the Graal JIT recognises when it's compiling Truffle interpreters and does it in a fast way, but it doesn't have to and Truffle interpreters can run on any JVM including those that pre-date Truffle's own existence. They just won't be as fast. GraalVM EE is a for-pay version that does an even better job, well beyond state of the art, but the open source free version is no slouch and Scala code can go faster by e.g. 20% or more even with the open source version.
There is ALSO the native-image tool that compiles things to native code ahead of time. Many people use the word Graal to mean this, because it's the most eye-catching thing in the suite, but that's not correct. It's called either native-image or SubstrateVM. The open source version of this produces code which is slower than regular HotSpot runs but has no warmup time and starts instantly. The speed drop is due to losing profile guided and speculative optimisations, it's not a pricing issue. The EE version of native-image that you pay for has various other features like the ability to gather profiles using HotSpot and then use them when compiling to win back some of the speed, but not all of it - you can't really beat Graal JITC on HotSpot for peak performance. The EE version also has some other useful features for security and sandboxing.
In other words, the Java/Oracle guys seem to be doing exactly what you'd want to see: there are standards, specifications and then implementations - all clearly separated. There are open source implementations, and better ones with support contracts that fund the development of the whole shebang.
While all of this is true and interop can save you from being completely stuck in a problem on many occasions, the interop is not as seamless as many advertise, at least not in Clojure (don't know about F#). Many times in Clojure it is easier to write two wrappers, one in Java (to make the Java lib access tolerable from Clojure lol) and the Clojure one. And I'm talking about ad-hoc wrappers for your use case/functionality needs, not general wrappers, which take much more time. Productivity goes down in this translation, and you start to wonder why you are wasting all this time wrapping Java libs. Kotlin is much better in this area of course.
I think there are at least two cases where JVM interoperability is relevant. The first is when a company is already using the JVM and so knows how to deploy and monitor it; in that case, it’s easy to choose because it presumably imposes limited burdens on the company’s operations people. The second case is when you want access to some mature environment, so that all kinds of packages and services are available to support it, but don’t have a commitment to any one in particular yet, and Clojure qualifies because of the JVM; in that case, a non-JVM language like Python, PHP, or Ruby might also work.
It's not just killer JVM interop, it's killer all around interop. Clojure relies on interop more than any other major language, the language has inherent syntax around it and even its standard library is designed with it in mind. That makes it very easy to adapt over different runtimes, giving Clojure more reach than most. That's why you have Clojure JVM, Clojure CLR, Clojure Unity, ClojureScript, Clojerl (Clojure on BEAM), etc.
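For example, on the JVM the host interop forms are just ordinary syntax (a minimal sketch):

(import 'java.time.Instant)
(.toString (Instant/now))          ; instance method call
(System/getProperty "user.dir")    ; static method call
(java.util.ArrayList. [1 2 3])     ; constructor call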
Yeah, I worked at a company where several Haskell projects crashed and burned because of interop issues with existing JVM systems, but the Clojure project I worked on did really well.
As someone who tried out GHCJS, then Purescript, then Elm, then Clojurescript:
- GHCJS and PureScript are powerful, but the learning curve might be steep[1]
- Elm is an excellent entrypoint into ML programming in the browser. Solid story for new users, and a great standard library for interactive web applications.
- ClojureScript differs from Elm in that it embraces its host, with all its power and all its wrinkles. Writing Elm is mostly a smooth experience. Read the guide[2], then you can actually build a web app.
I've spent the most time with Elm. Other people might have different experiences.
GHC compiles Haskell either directly to assembly or via LLVM. There is a GHCJS project, which works fine, but personally I've not used it. There's not much of a story in terms of frontend Haskell.
On the other hand, there are two "child" languages of Haskell for the job: Elm, which is a frontend-focused language, and PureScript, which compiles to JS and is designed for that use case.
I actually don't want to get into the argument over which is better, but dynamic and static types do change everything, also with regard to how those features in the list feel in a language.
Agreed! Asking which language is "better" is a fool's errand. But talking about specific upsides, especially in comparison with other languages, is a useful reductivist tool in determining how we can improve the story for each language.
Compared to Haskell, Clojure, like Elixir and Erlang and Scheme, is a much smaller and simpler language and is easier to learn especially for people without a theoretical CS / math background. This is due to Haskell's ambitious and powerful static type system.
Interop, REPL driven development, s-expressions, and great support for working with maps seem to be the ones that don't apply to Haskell from my quick glance.
I would agree with all of these points except "pattern matching". Yes several libraries implement it, but it is not built in and even the best libraries feel clunky compared to for instance the built in destructuring. Rich explicitly rejected pattern matching ala ML for the reasons provided here: https://gist.github.com/reborg/dc8b0c96c397a56668905e2767fd6...
I wish he gave more examples in that response because I'm generally confused what he's talking about. I'm familiar with Racket and F#, but not having used Clojure, I'm missing some context about the Clojure ways of doing things and examples of the problems he claims.
> I feel about them the way I do about switch statements - they're brittle and inextensible.
That is not the case in a language like F# or OCaml. I do note that F# was introduced only slightly before Clojure was, but pattern matching provides nicely extensible functions and are anything but brittle in those languages. Also, active patterns in F# allow one to extend the pattern matching functionality.
> The binding aspect is used to rename structure components because the language throws the names away, presuming the types are enough. Again, redundantly, all over the app, with plenty of opportunity for error.
I'm not sure what he means here. Again in a language like F#, names of the data aren't thrown away. They are pattern matched against, only being "thrown away" to do actual calculations. Nothing is ever lost where the data came from. For example:
type Shape =
    | Circle of float
    | Square of float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Square s -> s * s
There's no confusion here. In fact, pattern matching in a language like F# allows one to completely remove the possibility of error. For example, this really shows off in parsing applications. Once your parsing function returns a type that can be pattern matched, it's extremely difficult to have an error in the pattern matching sections of code. These are typically the most robust parts of the application.
> I'd much rather use (defrecord Color [r g b a]) and end up with maps with named parts than do color of real*real*real*real, and have to rename the parts in pattern matches everywhere (was it rgba or argb?)
I don't understand this either. In F#:
type Color = { R: float; G: float; B: float }
let colorFunction { R=r; G=g; B=b } = r * g * b
No names are thrown away. Also, the comment on rather using maps seems to assume the data type for every element of the data structure is the same. How do you just use maps when the underlying types of your record aren't the same?
> I feel about them the way I do about switch statements - they're brittle and inextensible.
What is meant by this statement is that pattern matching violates the open/closed principle. If you add a new type to switch on, you need to update all the pattern matching code in the whole application to account for the new type.
It’s one of the two sides of the “expression problem”[0] (the other being object oriented polymorphism).
Clojure’s approach to this is to use “multi methods” which is sort of a “pattern matching”/“strategy pattern”. You are free to add in a new implementation of the multi method without having to update existing code
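A rough sketch of what that looks like, using a shape example like the one above:

;; dispatch on the :type key of a plain map
(defmulti area :type)
(defmethod area :circle [{:keys [r]}] (* Math/PI r r))
(defmethod area :square [{:keys [s]}] (* s s))

;; a new case can be added later, from any namespace, without touching the existing ones
(defmethod area :rect [{:keys [w h]}] (* w h))

(area {:type :circle :r 1.0}) ;=> 3.141592653589793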
I think the artificial shape example does not adequately show how variant types/pattern matching are actually used in languages like SML/F#/Haskell. It combines two different cases.
Suppose you have a type `colour` defined as `RGB(int, int, int) | HSL(int, int, int)` and then you add representation as CIE. Then having to update each match on a colour is absolutely a feature not a bug. If you miss some out then your code will be wrong.
On the other hand, suppose you have various ways of serialization (JSON/XML/s-expressions). In this case, it would probably be nice if you could add a way to serialize to e.g. protobufs without having to jump around your codebase and all its clients fixing type errors. But in most languages of the kind we're discussing, you can do! You just have to represent the different serialization methods in some way other than a variant type. For instance, in OCaml you could just use classes and inheritance (although in practice you probably wouldn't because the language provides nicer tools).
Thanks for the link to Clojure's polymorphism. I'll need to read through it later and in more detail.
Isn't the open/closed principle more of an OOP design concept? In a statically typed language like F#, I want and expect to be notified what functions I need to update when I add a new type constructor to an existing type. This isn't a problem and is welcomed. Just because one updates functions doesn't make them brittle or inextensible. By only adding a new pattern matching branch, one is able to extend a function without affecting the other branches. However, this is getting into the statically typed nature of F#.
I think that link explains the expression problem rather superficially. It says you just need to add a new class, but neglects to mention that that also entails adding the new method overrides. So simply saying the OOP way is easy and functional programming is difficult when adding a new type is not really accurate. Same thing for adding a function in the functional programming paradigm, because it neglects that you need to add a branch for every existing type. In reality, OOP inheritance and functional pattern matching are simply transposes of each other, and I'd argue that one is not necessarily better or worse than the other. They're simply different ways of organizing the data and the functions on the data.
I think Rich's point may be specific to Java switch statements and their limitations. (I know he's familiar with MLs, but I would imagine his frame of reference is mostly colored by Java's control structures.) The idiomatic clojure for your color example would leverage names in the same way:
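(A minimal sketch of what that might look like, my own code rather than the original commenter's, where the names travel with the data instead of living in positional patterns:)

(defrecord Color [r g b a])

(defn luminance [{:keys [r g b]}]           ; names come from the data, not positions
  (+ (* 0.2126 r) (* 0.7152 g) (* 0.0722 b)))

(luminance (->Color 0.5 0.5 0.5 1.0))       ; records behave like maps
(luminance {:r 0.5 :g 0.5 :b 0.5 :a 1.0})   ; plain maps work too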
I always found core.match really nice, but in general I strongly prefer using multimethods for the sort of thing other languages use pattern-based dispatch for.
How about Clojure vs Scala? Anecdotally speaking, I've seen more Clojure than Scala at my company, both being incredibly niche (I've seen more Groovy than either to be honest).
If I want to get more into FP, is there any strong positives/negatives for either? I must say though that after using Racket for a bit, I am a fan of the parens. Makes expressions crystal clear.
I don't see a future for Scala. Since Java got lambdas and var, enough of the pain is gone that Scala ends up adding its own pain. And now there's Kotlin if you really want to avoid boilerplate and not deal with sbt.
Clojure is actually designed as a functional language and not as multi-paradigm as Scala.
Yeah. It was pretty clumsy before it got less clumsy - and it makes me sad because a lot of the graph-based database technologies glommed onto it pretty early on - and I mostly like where they say they are going. But their base compiler sucks, and they messed up their community with the new compiler, Python 3 style.
I guess Kotlin is cool and clean - which is good. And has no ecosystem - which is bad. But stapling JVM languages together for ad-hoc purposes is what we have learned how to do, neh?
For my part, I'm probably going to go back to high performance renderers and embedded systems. Like the man said back in the day: "You can all go to hell - I am going to Texas". (Unfortunately I have been there for 30 years since I learned that I hate it.)
SBT is a disaster. I love Scala as a language but hate sbt. Scala 3 looks promising but I seriously get PTSD thinking about Scala. We have a lot of Scala Spark apps at the financial company I work for and everyone gets scared touching them. Java has improved a lot since I last touched it, so I'm not sure people outside of academia will keep using Scala. For my PhD thesis I use Scala without SBT and I love it.
I've used the build tools of almost all the major languages (JS/TS, Rust, Haskell, Kotlin, C etc) and I don't honestly see any which are particularly easy.
A lot of it is philosophical - for the reasons that I dislike Gradle, I dislike SBT, but SBT has the honor of also having historically (and thankfully since cleaned up) esoteric syntax. I simply don't want builds to be allowed to reach the complexity that sbt allows.
I would definitely agree with your point that most build tools aren't easy! I've used maven and grumbled about it, and I'm mostly sold on scripting my build in the same language I'm using anyway, but in my long time on the JVM, it's always been the gradle and sbt projects that end up with inscrutable and hard-to-follow build scripts.
The sbt REPL also regularly breaks existing build scripts by changing how args are passed/parsed, or even how terminal color support works, and I just get sad every time I see a new, unexpected error from one of our CI runs.
"Y does this too" isn't really a compelling argument that X doing it is reasonable, I don't think (and in response to your initial question, I'm entirely happy to lump Gradle in with SBT on its "understandability" demerits).
For me, it's especially bad with the indirection, and the 3-dimensional cube of settings (I can never remember after all these years which one I want to set, the scope or the configuration or the task...). The complete mystery of how plugins affect the final build configuration.
I even went to a lot of effort to master SBT. I've read through the official docs numerous times, and Josh Suereth's Book https://www.manning.com/books/sbt-in-action, and I still feel like I don't grasp it very well. I'm just not smart enough for SBT.
I think a lot of it is probably just that sbt feels as deep as learning entire programming languages sometimes and that is not something I want in a build tool.
sbt is not perfect but new build tools are coming up lately for Scala (check out lihaoyi/mill and propensive/fury, to name a couple), plus people are working with bazel (but I guess that's more for monorepos/very big codebases). My experience (7 years and counting) with sbt is that it's very very good until you need to roll out your own tasks/plugins.
I use a variety of languages and always feel that Scala gets a bad-rap because it's a compromise language (though I feel the same about Rust and most people seem to like it).
If you're used to Haskell you'll get annoyed at the fact that the inference is only within a single statement and that polymorphic functions resolve the typeclasses in a particular way.
If you're used to Scheme, you'll get frustrated that the identifiers can't be created easily in its new macro system or you can't manipulate the scope-sets of the identifiers.
If you're used to Erlang you'll wonder why Akka can't reload functions easily.
If you're from Java you'll wonder why everyone is going on about profunctors, effects and finally tag-less.
Scala seems to get the most love from spark users. But even then the python bindings are pretty good. Scala 3 is going to be released soon, so there might be a surge of interest.
Language intricacies aside, is there a reason to use Clojure over Elixir, Erlang? Genuinely curious what JVM has to offer vs BEAM / OTP if you're going to use dynamic languages.
Clojure's version of immutability is more useful in some domains than Elixir/Erlang's. E.g. you can both safely and efficiently share memory in Clojure across multiple threads. You can't really do the same in Elixir that I'm aware of - it triggers a deep copy which can kill performance. Sometimes acceptable, sometimes not.
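A minimal sketch of what that sharing looks like (my own illustration): several threads read the same persistent vector with no locking and no copying, and a "modified" version is just a new value built via structural sharing.

(def base (vec (range 1000000)))

(def readers
  (doall (for [_ (range 4)]
           (future (reduce + base)))))   ; each thread reads the very same vector

(def extended (conj base :extra))        ; cheap new version; base is untouched

(map deref readers)                      ; all readers agree, no races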
Elixir/Erlang processes serve a lot of roles. If those roles don't line up cleanly your code could end up a lot more complex than necessary in other languages.
In the past the JVM had better raw performance, but I'm not sure how much that might change with the new JIT in BEAM.
Practicality. Most language "comparison" discussions miss out on the practicality aspect: does it work, does it have a good runtime (both true for Elixir and Erlang), can you write code that runs both client-side and server-side, are there good abstractions and libraries for many programming models, is it being actively maintained and developed?
Clojure ticks all of those and more, while most superficial comparisons concentrate on superficial aspects.
It has really good monitoring, profiling, and interop tooling. Both of the JIT compilers it has (C2 and Graal) are very good at, and optimised for, compiling dynamic languages.
Also the recent JVMs have GCs that can collect enormous heaps without stuttering of any kind.
The entire syntax for Clojure fits in a single line. It's easier to learn, and being as expressive as it is, the core idioms are quickly picked up as well.
So - You can get into it quickly, very quickly if you're already familiar with FP.
The brevity of the code means that you'll produce much more robust code, which takes up a lot less screen real-estate. This allows you to grasp the functionality of any code you read, very quickly and start working on the problem.
It'll go as fast as Java, but slower than C/Rust. For some performance oriented tasks, you'll have to put in more work than makes sense, to get the performance you want. But for 99% of the Apps that are being written, Clojure will perform just fine and you'll end up with better code.
Compared to Haskell or most other FPs (not F#) you get the added benefit of being on the JVM. Write once, run everywhere. Huge libs to do everything from 3d graphics to webserving.
In most cases, I use Clojure for the above reasons summed up in this one sentence: it makes me more effective than the alternative.
ps: Having enjoyed Lisps for 20 years or so, I've never used ParEdit :)
There’s a balance to be struck between conciseness and readability. Don’t be as terse as APL and don’t be as verbose as Java. Clojure hits the sweet spot for me, it’s subjective ofc.
I disliked the syntax, but was very curious why lots of devs were fascinated by it. Started to study Lisp/Clojure in 2015 and used more and more, and nowadays I love it.
I use Clojure because I find it more fun and interesting to program in. As a bonus, it happens to also be practical, robust, productive and safe; with great tooling, a huge ecosystem, reach to the browser, server, command line, and desktop/mobile, good performance, good scale, and an awesome community.
> In Clojure, whenever you "append" to a vector (array) you get a "new" vector and the original does not change. Anyone with a reference to the original can always count on it being the same.
This has never made any sense to me. Can someone please explain why you would still want the original vector to continue to exist with data that no longer reflects the current system? What am I missing?
To avoid race conditions and to preserve abstraction boundaries. This is the archetypical design pattern in functional languages. For a simple example, if you first check the size of a vector before accessing index `i`, you can be sure the length hasn't changed right after your size-check. It's exactly like writing code using only immutable Strings in Java, or frozen sets in Python.
So in your example, it "continues to exist" in local variables to reflect the state of the system as it was when you read it, as long as you still hold a reference to the old vector. Typically, you'd ask your software to fetch a fresh copy of the vector any time you'd want new data. But that's explicit in code. You'll have fewer surprises if mutation (like vector-appends) are never shared between variables.
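To make the quoted point concrete, a tiny REPL sketch (my own):

(def v [1 2 3])
(def v2 (conj v 4))
v   ;=> [1 2 3]    the original is untouched
v2  ;=> [1 2 3 4]  the "appended" version is a separate value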
I have a context scratch-pad hashmap that I pass into a top-level function. It can then be decorated with extra scratchpad data all the way down the call-stack and passed into lower functions, for them to make use of. So each function can pass stuff down, but it's not available further up the call stack. It effectively looks like a stack object in terms of its semantics: as you unwind the stack you unwind history, 'undoing' changes. And the stack can take many different paths over execution.
Functions can do pretty much anything they want to the object further down the stack, without affecting other functions' inputs (parents or siblings). If it were mutable, the functions would suddenly be coupled to each other, and could change each other's data inputs. Add concurrency to that and it gets worse.
There are other ways to do this with Clojure. But I like this method, it's obvious and easy to test. It also feels reminiscent of Prolog.
In my example I'm associating new values into a hashmap, not appending to a vector, but it amounts to the same thing.
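A minimal sketch of that scratch-pad idea (hypothetical names, not the parent poster's actual code):

(defn do-work [ctx]
  (println "leaf sees:" (keys ctx)))

(defn process [ctx]
  ;; decorate the context for callees; the caller's map is untouched
  (do-work (assoc ctx :started-at (System/nanoTime))))

(defn handle-request [ctx]
  (process (assoc ctx :request-id 42))
  ;; back here, ctx still has only the keys we were given
  (println "top sees:" (keys ctx)))

(handle-request {:user "alice"})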
Suppose I provide you with a black-box function foo that takes an int.
You can write the program
x = read_int_from_terminal();
y = foo(x);
println(x + ": " + y);
And you can be confident that invoking foo has no effect on the value of x that will print out on subsequent lines. x is a local variable that refers to a stateless, immutable, mathematical object. If x refers to the number 3, it will continue to do so until you personally tell it to refer to something else.
In clojure, as in other functional programming environments, a vector is also a stateless, immutable, mathematical entity. Which is nice because nobody can change its state out from under you and that makes programs easier to reason about.
There are also specific use cases where this feature may shine in a specific way, for example making it easy to maintain an "undo history" when implementing a text editor. If the state of a buffer in your editor is an immutable value then it's easy to maintain a stack or list (or whatever) of all the states of the buffer - the top of the stack being the current state - and operations on the buffer simply create a new version but do not destroy any information about prior versions.
Outside of such specific use cases, though, it's just about referential transparency and enhanced ability to reason about the interactions between different pieces of code.
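A minimal sketch of the undo-history idea, assuming the buffer is just a string (my own hypothetical code):

(def history (atom (list "")))            ; head of the list = current buffer state

(defn edit! [f & args]
  (swap! history (fn [[cur :as h]] (conj h (apply f cur args)))))

(defn undo! []
  (swap! history #(if (next %) (rest %) %)))

(edit! str "Hello")
(edit! str ", world")
(first @history)  ;=> "Hello, world"
(undo!)
(first @history)  ;=> "Hello"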
The main thing you’re missing is the (real) functional programming style, which expects this kind of behavior when manipulating data. To refer to it as a reference is somewhat misleading, functional programs are just dealing with a raw block of data, which is transformed into a new block of data. But to call it the old data and the new data is kind of silly, because it’s not really about state at all.
Immutability is very useful for dealing with concurrency. For example, if a thread is iterating over a vector and another thread mutates it, you don't want the first thread to get "the rug pulled from under it", so to say. If things can never be suddenly changed, you don't have to plan for that.
When you work in a functional programming style, every modification of an object's data member creates a new copy of the object; that copy is mostly identical to the old object except for the change introduced by the setter. In general that makes a lot of sense for smaller objects (in Java the String is immutable, so are tuples) - it is easier to reason about the object and you don't get race conditions.
In Scala you have mutable collections and immutable collections, like that one; the more accessible versions that you have in your default namespace are immutable (that's supposed to be the default choice).
Now in theory, smaller contiguous objects that span 'a few' cache lines would be easier to copy than to modify. My problem with that statement is that with the JDK you usually have lots of object references (you can't do a lot with primitive types), so you need to try hard to get an object that spans only a few cache lines. You would have more of these in Go, but they don't do a lot of functional-style programming in Go, afaik.
Maybe it would make some sense to port Clojure or Scala to the golang runtime.
Definitely. If you're using Clojure on the JVM, you should still read up on Java exceptions, as it'll help you debug things, even though the default error messages in Clojure have improved since then. Same if you're dealing with ClojureScript: you'll need to understand JavaScript errors and stack traces there.
The error messages were what got me to stop using Clojure when I first tried it out several years ago, the revamped error messages are what helped me to stick with it to the point of building out a complex production system. Dramatic improvement.
They've improved over the years, but they're still terrible compared to something like Rust. There are more ways to sidestep the issue though, if you develop with Spec and generative tests etc.
Not an answer, but... I do think that after you've used Clojure for a while, you get used to the Java stacktraces. I mean it has the line number of the error, it can't be that bad, right? Maybe I've never experienced a language with stellar error messages.
I always think it's hilarious when Clojure enthusiasts try to address concerns about the language by talking about parentheses, as if that was actually the major barrier to entry. The parentheses are at best a mild inconvenience...many people love them, including myself, but few people cite the parentheses as a reason to not use the language after actually trying it out. A non-exhaustive list on why not clojure:
* It's slow
* Development with the REPL is slow because the startup times are glacial and REPL-oriented development usually requires tons of from-scratch restarts.
* The tooling sucks: build systems, IDEs, debuggers, etc. If you feel like writing code with just an editor and a terminal is like going back to the stone ages, you're gonna want to bash your own head in with a mammoth bone club.
* Java interop is a black art, and when you need to use it, it will ruin any sense of elegance you felt for your code originally.
* The ecosystem practically doesn't exist, unless you're willing to absorb a lot of java libraries. See above.
* The lack of static types hurts you in many ways, most of all your ability to refactor with confidence.
* Clojurescript isn't the same language, no matter what they promise you. Clojurescript is weakly typed, Clojure is strongly typed. If you aren't aware of the difference, prepare to spend weeks of your life tracking down bugs that would take only days in Clojure, and would never exist in the first place in a statically/strongly typed language.
* It's fast enough for 99% of apps out of the box. It's fast enough for 99.99% of the apps with minimal tuning.
* Yes, if your project is very big and macro heavy, it can take some time, but startup times have improved. In any case, I BARELY need to restart my development JVM. I have one currently running that I haven't restarted for 1 week+.
* Depending on what your cup of tea is, there's emacs/CIDER or IntelliJ/Cursive. They both work well. IntelliJ/Cursive is an excellent IDE combination. I use it every day.
* Java interop is very straightforward, not sure what you mean. Sure your code might not be all pure anymore, but that's the price for solving actual problems.
* Good java libraries have wrappers. A ton of original Clojure libraries as well. https://github.com/cgrand/xforms for example allows you to easily do things that I can't even imagine doing in an imperative language.
* Static vs dynamic typing: don't want to get into that.
* "Clojurescript isn't the same language". I use both Clojure and ClojureScript every day and as far as Clojure-only code is concerned, it works in both languages 99.99% of the time. One case you can encounter issues is if you do something host-specific, like dealing with numbers. That's by design. Clojure embraces each host, does not try to reinvent it. When you just use pure Clojure data structure manipulation, it works the same across both languages and works like magic.
> It's fast enough for 99% of apps out of the box. It's fast enough for 99.99% of the apps with minimal tuning.
It might be true that Clojure is fast enough for the apps you write. I know it is not true for the apps I write and have worked on in my professional career.
From a performance perspective, Clojure isn't even close to Java. Although it's probably not worse than Python or Ruby.
There are millions of apps out there that are performance sensitive enough to not use Python or Ruby, but not so performance sensitive that they couldn't use Java.
Wow, performance comparison by someone writing Clojure for the first time, wonder how that will go.
Here's what someone with more than superficial understanding of the language can produce. Try not to base your argument on the first google search result next time
Here are my observations after doing clojure for a bit more than a year coming from doing js for two decades.
It’s fast, both clj and cljs, but performance is non-deterministic and, as with most functional languages, it can be hard to reason about. Profiling cljs is very hard.
Coming from the js world I feel that tooling generally works. Especially when it comes to build systems. Very happy to not deal with webpack inconsistencies. With regards to ides there’s cursive for IntelliJ users, Calva for vscode.
I can’t comment that much on Java interop. Js interop is a mild inconvenience.
With regards to ecosystem, googleability is an issue, but it is tempered by the simplicity of both clj and cljs.
About types: there is a class of bugs they help to avoid, and they do help with understanding the intent of the code you work with. The drawback is that they facilitate abstraction and don't help that much with reasoning about how a program actually runs.
Yes cljs is a different language. But in practice it feels very much the same. I do however have a big issue with being able to push deterministic performance out of it. Keeping execution time for any frame below 16ms can be a challenge and for some type of front end stuff js is to be preferred
All of these things are quite insignificant next to how fast a team can churn out good-quality code using Clojure. I've never seen any team write correct, readable and fast code as quickly as the one I'm with at the moment.
That article is disingenuous, the startup time of "hello world" in Clojure is still pretty bad but tolerable, the issue arises when you start to add libraries, now your web service can take from 1-2 minutes or more to start.
Okay, here is an example. Let's say you're writing a web app, and you want to stick with the JVM. Pull up the Web Framework Benchmarks, narrow it down to Java, Kotlin, Scala, and Clojure. Let me know when you have stopped scrolling, cause it takes a while before you see the first Clojure framework.
Unfortunately, web apps are probably the thing that Clojure does best, and Clojure is already plenty slow there. Try it out for something typically throughput intensive (like mathematical programming or machine learning), or short lived (like CLI apps), and you'll give up almost immediately.
Clojure is in place #20 with a score of 1.3M vs 1.6M for the winner. That's fast enough.
More broadly: these kinds of benchmark numbers matter in a pretty small niche of web services, and it rarely makes sense to make them a big factor in PL choice.
There haven't really been a lot of efforts to get a high-speed web framework done in just Clojure, as that's not normally how professional Clojure developers work and deploy code. Not a lot of Clojure developers use frameworks in the first place, so the HTTP server ends up being something that gets pulled in as a library, and since it's running on the JVM, you use JVM servers, which are fast. vertx seems to be something that scores high, but unfortunately only the Scala binding seems to be in the benchmark you mentioned. Here's a Clojure alternative: https://github.com/vertx-clojure/vertx
Tooling, ecosystem, and Java are what killed clojure for me. I love the language and the way Rich Hickey approaches it, but I was constantly frustrated with the lack of an IDE and having to stop to deal with Java and Java tooling errors all the time.
In the end I felt like Clojure was too clever for its own good. By relying on Java (which was a great choice) it took a lot of the oxygen out of the ecosystem for other devs to build tooling and libraries, and without that there aren't as many people participating or becoming a well known community member from their work there. Again, can't say this was the wrong choice, but in my opinion ecosystem is the #1 thing for a language and the way Clojure was done had an impact on how the ecosystem could grow.
> REPL-oriented development usually requires tons of from-scratch restarts.
I heard that. I've read Clojure developers keeping the same REPL running for days and avoiding this problem, but I'd always be worried about the state of the global namespace not being what I thought it was. And this is especially true during development / exploration.
I'm going to have to disagree with almost everything you said, let's have a look:
> I always think it's hilarious when Clojure enthusiasts try to address concerns about the language by talking about parentheses, as if that was actually the major barrier to entry.
Not that they are a major barrier, but they do scare people away. I know this because I've been tempted to try lisps before, and after having a look at some source code, I was "scared" of the parentheses. They are ugly (at least I thought so at first sight) and they seem very tedious to type. Of course I know better now, but that was _the_ reason I had not to try lispy things, at least on a couple of occasions. So yes, the "the parens are A-OK!" pitch is absolutely warranted when trying to sell Clojure or any lisp for that matter.
> It's slow
So what? If I'm not writing anything CPU bound, I'm not very concerned about that. Clojure is fast enough for most things. The performance penalty is completely justified if it takes me half the time to write an app. This is just a trade off, or do you write all your stuff in ASM?
> Development with the REPL is slow because the startup times are glacial and REPL-oriented development usually requires tons of from-scratch restarts.
This is just very much not true. First of all, nobody is forcing you into REPL-oriented programming. Second of all, it doesn't take more than a couple of seconds for most apps to restart. And third, if you use "def" and "defonce" correctly you'll hardly ever have to restart the REPL.
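A minimal sketch of that defonce point (hypothetical names): stateful things survive re-loading a namespace, while plain defns pick up your edits.

(defonce conn (atom {:connected-at (System/currentTimeMillis)}))
;; re-evaluating this buffer leaves conn alone, so stateful resources survive

(defn handle [req]
  ;; plain defns are re-evaluated on every reload, picking up your edits
  {:status 200 :body (str "hello " (:name req))})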
> The tooling sucks.
That's just your opinion, man, and it's also pretty rude. Clojure has awesome IDEs and tooling, and the aforementioned REPL-driven development gives you the same "react-hot-reloading" developer experience, anywhere.
> Java interop is a black art.
Java (and Javascript) interop is pretty straight forward. You call functions and instantiate classes and stuff. No naked dancing under the moon involved.
> The ecosystem practically doesn't exist.
It maybe didn't some time ago. The ecosystem right now is thriving with pretty cool projects.
> The lack of static types hurts you in many ways, most of all your ability to refactor with confidence.
That's also pretty subjective, in my opinion the only thing that gives you confidence to refactor is unit testing.
Plus there's other ways to validate parameter and return values in Clojure, and they are not limited to the type.
You'll agree that a return value being of a given type doesn't guarantee that it's correct.
> Clojurescript is weakly typed, Clojure is strongly typed.
Plain wrong, types are semantically equal in both languages (they are weak). The semantics of the target languages are just implementation details.
You can try this on a REPL of any variant you like:
(def foo 42)
(def foo "Hello, World!")
Sharing code between Clojure and Clojurescript is fine, especially if you stick to the built-in data types and functions. Of course if you have a lot of specific interops interleaved, it's going to be painful.
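For what it's worth, a minimal sketch (my own example) of how shared code usually isolates the host-specific bits, via reader conditionals in a .cljc file:

(ns app.time)

(defn now-ms []
  #?(:clj  (System/currentTimeMillis)
     :cljs (.getTime (js/Date.))))

(defn age-in-days [created-ms]
  (quot (- (now-ms) created-ms) 86400000))   ; pure Clojure, identical on both hosts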
> Plain wrong, types are semantically equal in both languages (they are weak).
Nope. Try evaluating (+ 1 "1") in Clojure as well as Clojurescript.
One is weak...it doesn't throw an error, might throw a warning because the compiler can plainly see that it is wrong, but you'll get no warnings when it is out of scope or evaluated at runtime.
The other is strongly typed, because it will actually throw an error, knowing that adding a string and a number is nonsense. While it would be better if this was caught at compile time, catching it at runtime is waaaaay better than not catching it at all.
The languages are actually different languages, because their type semantics are different. And you might want to figure such things out, as well as the far-reaching implications, before you go around telling people they're wrong on the internet about a language that is lying in the mud like a crocodile, waiting for the perfect time to bite you in the ass.
> And you might want to figure such things out, [...] before you go around telling people they're wrong on the internet
Apply it to yourself my friend, you are wrong again. What you are talking about here is implicit type coercion, which doesn't have much to do with strong vs weak types.
Also in your example you do get a warning, so it doesn't seem to be a problem?
And if you really do not believe that to be a problem because the compiler warned you about it, I bid you the best of luck. You're gonna need it. There are dozens of ways that bug can sneak its way past the compiler where you won't get a warning because the compiler only does local type inference.
Compared to JS, yes, it's slow. You can make Clojure run faster, but you will be writing Java with parentheses, not Clojure.
Startup times are indeed atrocious, in Clojure the REPL is obligatory and not optional because of this. You can avoid REPL restarts by using a third party lib like component but you have to buy into a new architecture for your program that can be overkill for many occasions.
Java interop is not a black art but it is painful. The JVM has many great libs but the majority are not: many are loaded with lots of methods returning void, data models where everything needs to be a subclass of an abstract class, etc. So while you can, with time, get faster at writing ad-hoc Clojure wrappers for your use case, you waste lots of time doing it. And yes, this is something you have to do continually because, as you said, the ecosystem is very poor.
Without going into the static vs dynamic typing debate, I would say Clojure is at the top of the dynamic languages spectrum (the best in its class if the JVM fits nicely the problem at hand).
Clojurescript's interop with the JS ecosystem is crippled by relying on the Closure compiler, which doesn't play well with anything in the JS ecosystem.
Regarding REPL restarts: You don't need to use stuff like component. There are far lighter weight alternatives with no requirements for app structure. I use "mount" [0] for this. You define "start" and optionally "stop" code for anything in your app that you consider stateful and would like to reboot (like db conn pools, config loaded from files / env etc). No interfaces / protocols, easy app restarts within the same repl, no enforced structure in your app. You just use the resources as if they were def'd vars in a namespace (because they are).
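A minimal sketch of the mount approach (connect/disconnect are hypothetical stand-ins):

(ns app.db
  (:require [mount.core :as mount :refer [defstate]]))

(defn connect [] {:pool "pretend-connection-pool"})
(defn disconnect [c] (println "closing" c))

(defstate conn
  :start (connect)
  :stop  (disconnect conn))

;; In the REPL: (mount/start) / (mount/stop); then use app.db/conn like any other var.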
Regarding ecosystem and interop, in my experience (using clojure for about a third of the stuff at my job) I've rarely encountered a problem directly interop-ing with a java library, things like "doto" and "reify" do a good job of smoothing the rough java edges off.
More importantly I've usually had the choice of either directly using the using a pure clojure alternative or direct interop with java lib or using a clojure wrapper around the java lib. Incidentally those are my preferred choices in order (assuming the features I'm interested in are supported equally).
Perhaps I have been lucky in my requirements from the clojure / java ecosystems. I find the most important lesson I learned was to only use clojure wrappers if they are of a supremely high quality (either auto generated like cognitects aws lib or with a massive amount of momentum behind them like clj-http (wrapping on java http components)). An average quality or not super actively maintained wrapper is much worse than direct interop (again leaning heavily on the provided macros for interop to sand off the nastiness).
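To show what that direct interop tends to look like, a minimal sketch with plain JDK classes (my own example, no wrapper library):

(import '(java.util ArrayList Comparator))

(def xs
  (doto (ArrayList.)            ; doto returns the object after the method calls
    (.add "pear")
    (.add "apple")))

(.sort xs
  (reify Comparator             ; reify implements the interface inline
    (compare [_ a b] (clojure.core/compare a b))))

;; xs now holds "apple" then "pear", sorted in place by the Java method.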
I'm not sure I see the benefits here unless you are buying completely into Emacs and Java, and ignoring performance/overhead. Clojure is implemented in Java, and the only apparent way to write it is via Emacs.
The zeroth issue I see: functional programming espouses absolutely no side effects, meaning no capability of handling network or physical I/O errors.
The implementation being in Java means that it is not possible to use it in embedded environments (which might not be a problem for some users) but it does mean that performance is JVM limited.
Personally, I find the requirement of Emacs to be QUITE odious. In my opinion, Emacs is an OS/environment and not an editor. I'll use it if I want to edit a binary, but when an editor includes a psychotherapist mode (Eliza) it is not suitable for software development.
I have also seen some quite talented researchers/engineers spend multiple seconds trying to remember the sequence to do X in Emacs, when it would have taken FAR less time in vi or vim. Too many parentheses make the code UNMAINTAINABLE in a production environment.
- Intellij IDEA with the Cursive extension is very popular outside of emacs (I've met more clojure developers who use IDEs than those who don't.)
- Clojure uses the error handling mechanisms of the target runtime. You have try/catch statements and side effects are often used. It's not a no-side-effect language.
- Parentheses are almost always managed with parinfer/paredit and python-style indentation rules in production code I've seen.
You're quite right that performance will be tied to the JVM or V8/SpiderMonkey/etc.
Has anyone seen a big, multi-year system done in FP? Lots of people love FP, and it seems great for your own side project, but I'm not convinced it works in those typical big corporate systems where devs turn over every few years as the code base grows.
Technical debt can exist in any product and for a variety of reasons: inexperience, frequent employee churn, lack of leadership, frequent change of product direction, etc. Your comment implies that the technical debt is with Clojure the language or because of it. Beware of single-cause explanations of inherently complex problems.
I've designed and worked on systems with both OOP and functional-style code. Clojure is a tool that very much helps minimize technical debt. I think the minimal use of managed state in Clojure plays a crucial part.
Having done a good amount of looking through the Clojure language code itself, there's very little (if any) technical debt there.
I've worked in big F# projects in banking. It's absolutely superior to C# or Java in many ways. One notable drawback is "how much code can new hires write in their first month" which is not a metric I consider that important for big enough projects. The month or so needed to skill up a C# or Java programmer in F# is a drop in the bucket compared to the benefits it brought us.
Pitch (from Berlin) is using Clojure as well as Reagent with React. From what I saw it's quite a moderately sized app (not a gigantic one, but not a side-project sized either, in between that's what I mean).
We see that a lot in other languages too (taking curly braces as parens too).
There have been many attempts at languages that separate (or even do away with) the textual representation vs what is shown and interacted with in integrated tooling. Seemingly it has nice properties like forbidding invalid syntax. So far the common textual representation seems to be important enough to win over the AST languages.
they are logically equivalent (of course) and the parentheses become invisible once you are used to them. Every language has a threshold of "fluency" which, once you reach it, makes it easy to understand at a glance.
Retooling to use nesting boxes has obvious appeal to a newcomer, but none to an existing programmer, and the cost to the corpus of existing programmers is far too large to justify.
I no longer think language matters much. I like some more than others but they’re all a bag of hurt, you just get to pick which kind of hurt. I agree that pure functions are nice. But things like this end up mattering a lot less day-to-day than dumb-guy practical stuff like packaging and tooling. The important thing about clojure is that it runs on the jvm.