I have worked with Elixir/Erlang and Rust a lot, and I agree. Rust in particular gives ownership semantics to threaded/blocking/locking code, which I often find _much_ easier to understand than a series of messages sent between tasks/processes in Elixir/Erlang.
However, in a world where you have to do concurrent blocking/locking code without the help of rigorous compiler-enforced ownership semantics, Elixir/Erlang is like water in the desert.
People talk about SQLite's reliability but they should also mention its stability and longevity. It's first-class in both. This is what serious engineering looks like.
As the time horizon increases, planning for the future is necessary, then prudent, then sensible, then optimistic, then aspirational, then foolish, then sheer arrogance. Claiming 25 years of support for something like SQLite is already on the farther end of the good set of those adjectives as it is. And I don't mean that as disrespect for that project; that's actually a statement of respect because for the vast majority of projects out there I'd put 25 years of support as already being at "sheer arrogance", so putting them down somewhere around "optimistic" is already high praise. Claiming they've got a 50 or 100 year plan might sound good but it wouldn't mean anything real.
What they can do is renew the promise going forward; if in 2030 they again commit to 25 years of support, that would mean something to me. Claiming they can promise to be supporting it in 2075 or something right now is just not a sensible thing to do.
Having a plan for several hundred years is possible, and we've seen such things happen in other facets of life. We as humans are clearly capable of building robust, durable social organizations; religion and civics are both testaments to that.
I'm curious how such plans would look and work in the context of software development. That was more what my question was about (SQLite also being the only project I'm familiar with that takes this seriously).
We've seen what lawyers can accomplish with their bar associations, and those were created over 200 years ago in the US! Lawyers also work with one of the clunkiest DSLs ever (legalese).
Imagine what they could accomplish if they used an actual language. :D
I’d be interested to know what you would classify as having been planned to last hundreds of years. Most of the long term institutions I can think of are the results of inertia and evolution, having been set up initially as an expediency in their time, rather than conforming to a plan set out hundreds of years ago.
The Philadelphia Bar Association was established around 1800. I doubt the profession of law is going to disappear anytime soon, and lawyers have done a good job building their profession, all things considered. Imagine if the only way you could legally sell software was through partnerships with other developers?
Do you think such a thing would have helped or hurt our industry?
What I mean is that the bar association was set up for the lawyers themselves at that time. They didn’t create a 250-year plan for a Philadelphia bar that has played out in all that time and gotten us to today. It’s stayed in existence because it happened to stay useful for the lawyers that followed after them. Law itself is a collection of decisions made by judges and juries in trials, not decisions that are calibrated to have an impact over hundreds of years. Institutions are more like organisms that evolve, trying to adapt to the environment they find themselves in. The ones that work are able to stick around, and the ones that don’t die off.
You don't see an institution that established useful norms persisting for lifetimes as one worth preserving and emulating?
I do.
Medieval guilds are another example, but they could not deal with the industrial revolution or colonialism, so they don't seem like something worth studying (outside of their failures) if they couldn't deal with societal change.
You’re missing what I’m saying. I’m not commenting at all on their usefulness. I’m saying that the motivating factors that build and grow institutions are short term. Institutions last because people happen to find them useful over successive short time horizons, or they’re able to change them to suit the needs of the time. There’s no super long term planning in it, some happened to have the right combination of elements and some didn’t.
They should write a book on "Design and Implementation of SQLite". And make a course as well. That would interest a lot of people and ensure future generations can pick up where the current maintainers leave off when they decide to retire.
I’m trying and failing to think of another free software product I honestly expect to still work on my current data past 2050. And this isn’t good enough?
It's good, but this also assumes that the people taking care of this product in the future (who may not even be born right now) will hold the same attitudes.
How do we plan to make sure the lessons we've learned during development now will still be taught 300 years from now?
I'm not putting the onus on SQLite to solve this, but they are also the only organization I know of that is taking the idea seriously.
Just more thinking in the open and seeing how other people are trying to solve similar problems (ensuring teachings continue past their lives) outside the context of universities.
Thinking like this is how we ended up with a panic about Y2K. Programmers in the 1970s and 80s could not conceive that their code would still be running in 2000.
The computing industry was in a huge amount of flux in the 1970s. How many bits are in a byte? Not a settled question. Code was being rewritten for new platforms all the time. Imagining that some design decisions would last for decades probably seemed laughable.
Some churn is fads, but some is legitimate (e.g. "we know how to do this better now"). Every living system is bound to churn, and that's a good thing, because it means we're learning how to do things better. I'm happy to have Rust and TypeScript, for instance, even though they represent some amount of churn for C and JavaScript.
I think it starts with us collectively not using boring tech as a term anymore. If boring helps me be productive, that's exciting, not boring.
Some people on the React team deciding in 2027 to change how everyone uses React again is NOT exciting, it's an exercise in tolerating senior amateurs and I hate it because it affects all of us down to the experience of speaking to under-qualified people in interview processes "um-ackchyually"-ing you when you forget to wrap some stupid function in some other stupid function.
Could you imagine how absurd it would be if SQLite's C API changed every 2 years? But it doesn't. Because it was apparently designed by real professionals.
Hey, didn't you write kjbuckets and Gadfly? Or was that Aaron Watters? I was thinking about that the other day: that was one of the coolest pieces of software for Python 2 (though I think it predated Python 2): an embedded SQL database without needing SQLite's C API. I suppose it's succumbed to "software rot" now.
I think "boring software" is a useful term.
Exciting things are unpredictable. Predictable things aren't exciting. They're boring.
Stock car racing is interesting because it's unpredictable, and, as I understand it, it's okay for a race car to behave unpredictably, as long as it isn't the welds in the roll bars that are behaving unpredictably. But if your excitement is coming from some other source—a beautiful person has invited you to go on a ski trip with them, or your wife needs to get to the hospital within the next half hour—it's better to use a boring car that you know won't overheat halfway there.
Similarly, if a piece of software is a means to some other end, and that end is what's exciting you, it's better to use boring software to reach that end instead of exciting software.
I thought you were about to say go on a ski trip with your mistress while your wife is 9 months pregnant. That'd be exciting too, but in a bad/awful way.
I agree, it’s “understood” tech, not “boring” tech. It’s only boring because its simplicity and usefulness are obvious. It’s only boring because there are few to zero use cases left to discover for the tech. The tech isn’t boring; the person is boring.
> ..software from 25 years ago are still maintained and working
Interesting question. Seems to me that a lot of open-source software from the year 2000 is still being maintained and used. Closed-source software? Not as much.
Something to keep in mind when you are looking at software options for your next project.
Rebinding is not mutation. This seems pedantic, but it's an important distinction: none of the semantics of the runtime are changed, and the data remains immutable. You probably know this. However, for the benefit of readers who may be less familiar: Erlang does not allow variables to be rebound, so Erlang code like this is typical:
X1 = 8.
X2 = X1 + 1.
X3 = X2 * 302.
You cannot, say, do this:
X1 = 8.
X1 = X1 + 1.
This is because in Erlang (and in Elixir) the `=` is not just assignment; it is also the operator for pattern matching. This has implications that are too broad for this post, but the key point here is that it's attempting to see if the left side and the right side "can match".
Whereas writing the same thing in Elixir would look like:
x = 8
x = x + 1
x = x * 302
This is because Elixir allows `x` to be rebound, in effect changing what data `x` points to, but not mutating the underlying data itself. Under the hood Elixir rewrites each expression into something resembling the Erlang version.
The practical effect of this is that if, for example, you insert a process spawn somewhere in between any of the lines that reference `x`, that process gets its own totally immutable version of the data that `x` points to at that point in time. This applies in both Erlang and Elixir, as data in both is completely immutable.
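For example (a minimal sketch): the spawned closure captures the value `x` has at spawn time, and the caller's later rebindings don't affect it.

x = 8
x = x + 1
# the new process captures x = 9; the caller's later rebinding doesn't reach it
spawn(fn -> IO.puts("in process: #{x}") end)
x = x * 302
IO.puts("in caller: #{x}")   # 2718, while the spawned process still sees 9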
It should also be noted that handling state like that is not really idiomatic Erlang. State is updated at the process level, so traditionally you spawn another process, which is trivial to do. On the BEAM that is fast enough for 95% of cases. If you really need mutation of local variables for performance reasons, you should already be writing NIFs anyway.
State variables are what I think corpos call a "code smell". The BEAM/OTP isn't a number cruncher; there are better tools out there if you're doing a lot of that. Erlang is, at its core, about constraint logic programming. It is best thought of as a tool for granular, scalable, distributable userspace scheduling. If you need something outside of that, use NIFs or Ports. Both are quite nice.
This has nothing to do with math or number crunching on the BEAM. This has nothing to do with mutation. This has nothing to do with performance.
This kind of process and function-local static single-assignment code is all over the place in Erlang codebases. It's incredibly common. The other popular method is tail recursion.
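For anyone unfamiliar, the tail-recursive version looks roughly like this (a sketch; `Counter` is a hypothetical module, but the receive-loop shape is the standard idiom):

defmodule Counter do
  # the "state" lives in the argument of the tail-recursive loop
  def loop(count) do
    receive do
      {:increment, by} ->
        loop(count + by)
      {:get, caller} ->
        send(caller, count)
        loop(count)
    end
  end
end

pid = spawn(fn -> Counter.loop(0) end)
send(pid, {:increment, 5})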
I searched for literally 30 seconds and found these:
> It should also be noted that handling state like that is not really idiomatic Erlang.
It's not about the state but about intermediate results. When you have a value that you pass to one function, and then you need to pass the result to another function, you're not dealing with a "state" as OTP defines it, unless the calls are asynchronous. Often, they're not, and that's where variable rebinding comes in.
Worth noting: the `|>` (pipe) macro operator in Elixir serves a similar purpose, as long as you don't need pattern matching between calls. In that case, you don't have to name intermediate results at all, resulting in cleaner code.
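For example (a sketch; `parse/1`, `validate/1`, and `persist/1` are hypothetical functions):

# instead of naming each intermediate result:
x1 = parse(input)
x2 = validate(x1)
persist(x2)

# each result is piped in as the first argument of the next call:
input |> parse() |> validate() |> persist()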
> State variables are what I think corpos call a "code smell".
Having to call multiple functions in a sequence is the most natural thing to do, and Erlang code is littered with "X1 = ..., X2 = ...(X1), X3 = ...(X2)" kind of code everywhere.
There are some libraries (based on parse transforms) that introduce a sort of "do" notation to deal with this issue (erlando and its variations come to mind).
I love it. I didn't know. It's going to take a while to make this a pervasive feature of most Erlang codebases, but it seems like a good feature to introduce.
I know there are monad libraries using parse transforms and/or list comprehensions, but I often found their use is frowned upon in the Erlang community. I kind of assumed the GP in this thread would reject them, given their negative opinion on macros.
I was in a similar situation, ended up relying on libs that used parse transforms a lot and then found out most of my usage could have been replaced by the new `maybe` expression.
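For reference, a minimal sketch of `maybe` (an Erlang language feature from OTP 25; `db_lookup/1` and `fetch_profile/1` are hypothetical):

%% On OTP 25/26 this directive is required; maybe_expr is on by default in OTP 27.
-feature(maybe_expr, enable).

find_user(Id) ->
    maybe
        %% each ?= short-circuits to the else clauses if the pattern doesn't match
        {ok, User} ?= db_lookup(Id),
        {ok, Profile} ?= fetch_profile(User),
        {ok, {User, Profile}}
    else
        {error, Reason} -> {error, Reason}
    end.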
Full disclosure: I started with Erlang, I get paid to work with Elixir every day, I love Erlang still.
Why someone might like Elixir:
- slightly less crufty stdlib for a lot of the basic stuff (though we still use the Erlang stdlib all the time)
- the Elixir community started off using binaries instead of charlists so everything uses binaries
- great general collections libraries in the stdlib that operate on interfaces/protocols rather than concrete collections (Enum, Stream)
- macros allow for default impls and a good deal less boilerplate, great libraries like Phoenix and Ecto, and the community seems to be pretty judicious with their use
- protocols allow datatype polymorphism in a really nice way (I know about behaviours, they are also good; see the sketch after this list)
- very standard build tool/project layout/generators that have been there from the start (Erlang has caught up here with rebar, it seems)
- a lot of high quality libraries for web stuff, specifically
- convenience stuff around common OTP patterns like Task, Task.Supervisor, Agent, etc.
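On the protocols point above, a minimal sketch (`Size` is a hypothetical protocol, not from the stdlib):

defprotocol Size do
  def size(data)
end

defimpl Size, for: BitString do
  def size(s), do: byte_size(s)   # strings are binaries
end

defimpl Size, for: Map do
  def size(m), do: map_size(m)
end

Size.size("hello")   # => 5
Size.size(%{a: 1})   # => 1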
For me, I love the clarity and brevity of Erlang the language but I find Elixir a lot more pleasant to use day-to-day. This is just personal, I am not making a general statement saying Elixir is better.
> Last I checked, the debugging experience with elixir was pretty subpar.
Just curious, why is this? All of the Erlang debugging stuff seems to work.
> Just curious, why is this? All of the Erlang debugging stuff seems to work.
But you'd see a decompiled Erlang-ish code in the (WX-based, graphical) debugger, no? Genuinely curious, I think it was like that last I checked, but that was in 2019.
To a first approximation HN is a group of people who have convinced themselves that it's a high quality user experience to spend 11 seconds shipping 3.8 megabytes of Javascript to a user that's connected via a poor mobile connection on a cheap dual-core phone so that user can have a 12 second session where they read 150 words and view 1 image before closing the tab.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
The fact that this article and similar ones get upvoted very frequently on this platform is strong evidence against this claim.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
It's not that they convinced themselves, but that they don't know how to do any better. It is as fast as it can be to the extent of their knowledge, skill, and ability.
You see some legendary developers show up on HN from time to time, sure, but it is quite obvious that the typical developer isn't very good. HN is not some kind of exclusive club for the most prestigious among us. It is quite representative of a general population where you expect that most aren't very good.
This kind of slop is often imposed on developers by execs demanding things.
I imagine a large chunk of us would gladly throw all that out the window and only write super fast efficient code structures, if only we could all get well paid jobs doing it.
This is all well and good that we developers have opinions on whether Go compiles faster than Rust or whatever, but the real question is: which is faster for your users?
...and that sounds nice to me as well, but if I never get far enough to give it to my users then what good is fast binaries? (implying that I quit, not that Rust can't deliver). The holy grail would be to have both. Go is generally 'fast enough', but I wish the language was a bit more expressive.
I've never taken a lick of CS in a formal setting, and I feel like that's increasingly common as programming has broadened its employment base. Most of the people I work with haven't done formal CS, either. Consequently I've had to learn all of this stuff on my own. There are communities out there that value this kind of education. I can vouch for the Rust community, which has helped me learn _a ton_ about this kind of "lower level" stuff over the last 5 years despite starting my career doing Ruby.
This is not exactly what you’re referencing, but I bring it up to show just how complicated things can be: a Minnesota court recently ruled that you do not have the right to use deadly force if you have the opportunity to escape.
And this is the crucial bit, quoting the article: “The court decided the principle also applies to people who merely use the threat of force — meaning one cannot pull a weapon in self-defense if there are other means to escape, even if the person is threatening them with death or bodily harm.”
> Elixir abstracts that away and leaves a ruby-like language that hides much away - which is good and fine.
Processes, message passing, and behaviours are all completely first class in Elixir. There's no hiding. `spawn` is `spawn`. `send` is `!`. `GenServer` is `gen_server`. `@behaviour` is `-behaviour(...)`. The entire Erlang stdlib is available directly.
We use processes, messages, and behaviours all the time in regular Elixir work.
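For instance (a minimal sketch; from Elixir's point of view, Erlang modules are just lowercase atoms):

:timer.sleep(100)   # calls Erlang's timer module directly, no wrapper needed

pid = spawn(fn ->
  receive do
    msg -> IO.inspect(msg)
  end
end)
send(pid, :hello)   # Elixir's send/2 is Erlang's Pid ! Msg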
Elixir adds a different syntax (note that I did not say better), a macro system, a protocol system, its own stdlib functionality, some better defaults, and a build tool.
It's perfectly fine and reasonable to prefer Erlang (I learned Erlang before I learned Elixir), but for the benefit of other readers, they are really not that different. The distance between Elixir and Erlang is very small. They could almost be seen as dialects.