I saw this in my codebase firsthand. At first, we had to communicate with one device. Then two. For that second device, I subclassed the class that handled the communication with the first thing, and changed 3 methods in the subclass. Now it was possible to substitute that second device for the first, allowing a different product configuration. After a few years, we had to support a third device type, and now they wanted it to be possible to have more than one of them, and to mix and match them.
Supporting the third device was handed off to a junior dev. I pointed him at my subclass and said to just do that, we'd figure out the mixing and matching later. But he looked at my subclass like it was written in Greek. He ended up writing his own class that re-imagined the functionality of the superclass and supported the new device (but not the old ones). Integrating this new class into the rest of the codebase would've been nigh impossible, since he also re-implemented some message-handling code, but with only a subset of the original functionality, and what was there was incorrect.
His work came back to me, and I refactored that entire section of the code; this is when the generalization occurred. Instead of a superclass, I pulled the stuff that had to be inherited out into its own class with the same interface as before. The device communication part was modeled as drivers: a few simple operations covering the essential functions of the devices, implemented once per device type. I kept the junior dev's communication code for the new device, but deleted his attempt to re-imagine that superclass. Doing it this way also made it easy to mix and match the devices.
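A minimal sketch of the shape this ended up taking — device names, method names, and types here are all hypothetical, not the actual codebase:

```rust
// Hypothetical sketch: shared message handling stays in one place,
// per-device behavior lives behind a small driver trait.

/// The handful of operations every device type must support.
trait DeviceDriver {
    fn name(&self) -> &str;
    fn send_command(&mut self, cmd: &[u8]) -> Result<(), String>;
    fn read_status(&mut self) -> Result<Vec<u8>, String>;
}

/// One implementation per device type, written once.
struct DeviceA;
impl DeviceDriver for DeviceA {
    fn name(&self) -> &str { "device-a" }
    fn send_command(&mut self, _cmd: &[u8]) -> Result<(), String> { Ok(()) }
    fn read_status(&mut self) -> Result<Vec<u8>, String> { Ok(vec![0]) }
}

struct DeviceB;
impl DeviceDriver for DeviceB {
    fn name(&self) -> &str { "device-b" }
    fn send_command(&mut self, _cmd: &[u8]) -> Result<(), String> { Ok(()) }
    fn read_status(&mut self) -> Result<Vec<u8>, String> { Ok(vec![1]) }
}

/// The former "superclass" logic, now a standalone type that owns any
/// mix of drivers instead of being inherited from.
struct CommChannel {
    drivers: Vec<Box<dyn DeviceDriver>>,
}

impl CommChannel {
    fn poll_all(&mut self) {
        for d in self.drivers.iter_mut() {
            match d.read_status() {
                Ok(status) => println!("{}: {:?}", d.name(), status),
                Err(e) => eprintln!("{}: error {}", d.name(), e),
            }
        }
    }
}

fn main() {
    // Mixing and matching device types is just building a list of drivers.
    let drivers: Vec<Box<dyn DeviceDriver>> =
        vec![Box::new(DeviceA), Box::new(DeviceB)];
    let mut channel = CommChannel { drivers };
    channel.poll_all();
}
```

Adding a fourth device type is then one more driver implementation, and "mix and match" is just whatever list of drivers you construct.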
The more you contribute to Java, the bigger the problem gets. Java will never die if people keep feeding it, and it'll never be a good language, because that's impossible.
The real problem is you cannot ever marshal an army of cheap Lisp programmers, because Lisp programming requires not only learning but raw ability. The big companies are searching for a language that any idiot can learn in a week, with the hope that they can hire thousands of them now, and fire them all next year when LLMs are slightly smarter.
They run into the problem that programming is inherently hard, and no amount of finagling with the language can change that, so you have to have someone on every team with actual talent. But the team can be made of mostly idiots, and some of them can be fired next year if LLMs keep improving.
If you use Lisp for everything, you can't just hire any idiot. You have to be selective, and that costs money. And you won't be able to fire them unless AGI is achieved.
> you cannot ever marshal an army of cheap Lisp programmers
That may be, but since Lisp programmers are easily 10x as productive as ordinary mortals, you can pay them, say, 5x as much and still get a pretty good ROI.
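Spelling out the arithmetic behind that claim, taking those rough multipliers at face value:

\[
\frac{\text{relative output}}{\text{relative pay}} = \frac{10}{5} = 2
\]

so even at 5x salary, such a programmer delivers twice the output per payroll dollar of an average programmer at average pay.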
> you can't just hire any idiot
Yeah, well, if you think hiring any idiot is a winning strategy, far be it from me to stand in your way.
I don't think it's a winning strategy, but I'm in no position to make hiring or programming-language decisions, and I don't have the market insight that would be required to start my own company.
When you need the code to do something the language won't let it do, there is a massive cognitive load associated with figuring out how to do something approximating it. When a language or framework attracts lots of "how do I X" questions whose answers are all "completely rethink your program", that is evidence that the language/framework is not reducing cognitive load.
I'm optimizing for large, complex corporate projects, not beginners on toy projects. And I would also add that if you search for "how do you do X in Y language", you'll find results for almost every combination of X and Y, so I hardly think that is grounds to dismiss Clojure. More and more languages seem to be embracing immutability, and static functions are everywhere. Clojure just uses the two well, and believe me, if you need to refactor something you are at much more liberty to do so, which IMHO is a high bar for a language to clear.
>I'm optimizing for large, complex corporate projects, not beginners on toy projects.
There's nothing about large, complex corporate projects that demands that languages impose arbitrary restrictions on code, except the fact that so many corporations insist on hiring incompetent fools as programmers, often to save money in the short run by expanding the potential labor pool. They call them "guardrails", but a better metaphor would be the playpen. If you hire only competent developers, then you don't need to put them in a playpen.
>And I would also add that if you search for "how do you do X in Y language", you'll find results for almost every combination of X and Y, so I hardly think that is grounds to dismiss Clojure.
Well yeah, it's pretty much the norm in popular programming languages to make certain things impossible. And programming is driven by fads, so we're going to see more and more of this until it finally goes out of fashion one day and some other fad comes along.
Please elaborate on which programming language you think isn't a fad and isn't a playpen, and why. Restrictions in languages IMHO clearly narrow what you have to think about other code doing. Whether it's marking a field "const" in C++, the borrow checker in Rust, private fields in Java, or immutability in Clojure, all those things make it easier to know what a large system of code /cannot/ do, and that makes it easier to maintain. That has nothing to do with other people; it might just as well be you fighting years of code that you wrote yourself.
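To make that concrete, here is a small hypothetical Rust sketch (all names invented for illustration): the private field and the non-`mut` binding tell a reader, without searching the rest of the codebase, what the rest of the program cannot do to this value.

```rust
// Hypothetical sketch: language-enforced restrictions tell a reader
// what the rest of the codebase *cannot* do with this data.

mod account {
    pub struct Account {
        balance: i64, // private: no code outside this module can touch it directly
    }

    impl Account {
        pub fn new(balance: i64) -> Self {
            Account { balance }
        }

        // The only ways to observe or change the balance are listed here,
        // so auditing these methods covers every possible mutation.
        pub fn balance(&self) -> i64 {
            self.balance
        }

        pub fn deposit(&mut self, amount: i64) {
            self.balance += amount;
        }
    }
}

fn main() {
    let acct = account::Account::new(100); // immutable binding
    // acct.deposit(50);   // rejected: `acct` is not declared `mut`
    // acct.balance = 0;   // rejected: the field is private
    println!("balance: {}", acct.balance());
}
```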
People's choice of editor is influenced by what they're editing. For example, virtually every Lisp programmer uses Emacs, even though there are many alternatives out there, including VS Code plugins. And virtually every Java programmer uses a JetBrains IDE or something similar. I'd probably install an IDE if I had to work on a Java codebase. A wide diversity of editors is not the norm for every language.
>The point generalises beyond semicolons; everything you leave to context is something other people have to load up the context for in order to understand.
This is not true, because an editor can add any on-screen hints that are needed to help a human understand the code. For example, in my editor, Python code gets vertical lines that indicate where the different indentation levels are, so I can easily see when two lines of code far apart on the screen are at the same indentation level, or how many indentation levels lower the next line is after a long, highly-indented block. Python could add an end-of-block marker like Ruby does to make things like this easier to see, or it could try to encode the vertical lines into the language somehow, but I'd derive no benefit because the editor already gives me the visual clues I need.
If some program can generate that code automatically, the need to generate it, write it to disk, and then edit it by hand is proof that there is some flaw in the language the code is written in. When the generator needs to change, the whole project is fucked: either you delete the generated code, regenerate it, and replicate your modifications (where they still apply; where they don't, that can have major implications for the entire project), or you manually replicate the differences between what the new version of the generator would generate and what the old version generated when you ran it.
With AST macros, you don't change generated code, but instead provide pieces of code that get incorporated into the generated code in well-defined ways that allow the generated code to change in the future without scuttling your entire project.
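A rough illustration of that idea in Rust (declarative macros here, standing in for whatever macro system the parent has in mind; all names invented): the expansion is the "generated code", and callers only supply the pieces that vary, so there is no generated file on disk to hand-edit and later clobber.

```rust
// Rough illustration with a declarative macro: the expansion is the
// "generated code", and each call site supplies only the parts that vary.

macro_rules! define_device {
    ($name:ident, $id:expr) => {
        #[derive(Debug)]
        struct $name {
            id: u32,
        }

        impl $name {
            fn new() -> Self {
                $name { id: $id }
            }
        }
    };
}

// Each invocation expands to a full struct plus constructor at compile
// time; regenerating the boilerplate is just recompiling.
define_device!(Thermostat, 1);
define_device!(FlowSensor, 2);

fn main() {
    println!("{:?}", Thermostat::new());
    println!("{:?}", FlowSensor::new());
}
```

If the macro's author later changes what the boilerplate looks like, every call site picks up the new expansion automatically, which is exactly the property the generate-then-hand-edit workflow loses.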
>others to come along and easily read/modify something verbose without having to go context-switch or learn something.
They're probably not reading it, but assuming it's exactly the same code that appears in countless tutorials, other projects, and LLMs. If there's some subtle modification in there, it could escape notice, and probably will at some point. If there are extensive modifications, then people who rely on that code looking like the tutorials will be unable to comprehend it in any way.
It goes further than that. If you find someone who's almost dead but not quite, you can steal that person's organs, netting you way more than metals, weapons, or anything else you're likely to find on a battlefield.
This is also how Linux became the server OS of choice for most companies, while Windows NT remained a niche product only found at a certain kind of company. And before that, easy availability of UNIX to students made it the default OS of any high-end computer throughout the 1980s and well into the 1990s.