Stability by Design (potetm.com)
117 points by potetm 21 hours ago | 54 comments





I'm currently struggling with instability in the Rust 3D graphics stack.

All this stuff has been around for about five years now, and was mostly working five years ago. The APIs should have settled down long ago. Despite this, there are frequent "refactorings" which cause breaking changes to APIs. (I'm tempted to term this "refuckering".)

Some of this API churn is just renaming types or enum values for consistency, or adding new parameters to functions. Some changes are major, such as turning an event loop inside out. Code using the API must be fixed to compensate.

Because of all the breaking changes, the related crates (Wgpu, the interface to Vulkan and the other GPU APIs; Winit, the interface to the operating system's window manager; and Egui, which handles 2D dialog boxes and menus) must advance in lockstep. There's not much coordination between the various development groups. Wgpu and Winit each think they're in charge and that the others should adapt to them. Egui tries to cope. Users of the stack suffer in silence.

When there's a bug, there's no going back to an older version. The refuckering prevents that. Changes due to API breaks are embedded in code that uses these APIs.

I'm currently chasing what ought to be a simple bug in egui, and I've been stuck for over a month. The unit tests won't run for some target platforms that used to work, and bug reports are ignored while new features are being added. (Users keep demanding more features in Egui, and Egui is growing towards web browser layout complexity.)

Most users are giving up. In the last year, three 3D rendering libraries and two major 3D game projects have been abandoned. There are about two first-rate 3D games in Rust, Tiny Glade and Hydrofoil Generation, and both avoid this graphics stack.

The "Stability by Design" article is helpful in that it makes it clear what's gone wrong in Rust 3D land.


I think this is just the nature of Rust - everything needs to be specified upfront by the developer. Other languages are much more flexible in this regard.

It's also why we see so few original projects in Rust (compared to Go, etc), and so many rewrite-in-Rust projects: A rewrite of `ls` or `grep` is a project with engraved-in-stone requirements.

Creating an entire new project requires more flexibility as the requirements are only fully specified once some user feedback is in.

It would not be wise to choose Rust for something of an exploratory nature; anything original is going to hurt, because the large-scale refactors an original project inevitably needs are particularly painful in Rust.


Just wanted to mention that Slint has had a stable API for over two years now. (We released Slint 1.0 more than two years ago and have kept things backward compatible since) So stable GUI APIs in Rust are possible. If you're looking for something solid to build on, Slint might be worth a look.

That sounds like a complete tirefire tbh. The exact thing that I'm hoping to convince people to stop doing.

I'm glad the article was helpful though!


All the players think they're doing the right thing. Each group is doing a reasonably good job based on their own criteria. But their collective actions create a mess.

This is an issue that plagues modern open-source libraries in general, but the ones you mention, Winit and Wgpu, are particularly awful examples of this disease. Winit turning the event loop inside out might have had a reason on some platform, though I don't see why the old API couldn't have been kept working alongside it. Both projects frequently shuffle around their data structures with no rhyme or reason. Everything built on top of them breaks all the time, for tiny perceived gains in naming consistency. I think your suggested term "refuckering" is indeed a nice way to put it.

That's an issue on projects managed by inexperienced people. The license has little to do with this.

However I find that inexperienced people tend to coalesce around some languages rather than others.


In a world where everything means everything, fighting ambiguity is not an easy task. Hence the refactoring.

> When there's a bug, there's no going back to an older version.

The industry developed this practice for a reason. It would be really silly to hear about people running AI on Windows XP.

> Most users are giving up.

Rust is pretty new, users shouldn't rush to switch a stack because of hype. It takes time for languages to mature, even more for the ecosystem.


Sorry to hear that.

Is it fair to assume that every individual library/API can somewhat easily create a brand-new-world with every release (because it won't compile until the types are "re-aligned") yet they don't bother to check if the new release works with any other library/API?

I think the problem is partially cultural with a specific ecosystem but also fundamental. It takes a lot of type craft, care and creativity to design future-proof function/method signatures that are "open" to extension without breakage.


This is what you get if you start by making a library. First step should be making a program and once you have a fully functional, real world application, you should consider abstracting parts of it into a library.

I think most good libraries (regardless of language) are born out of a process like you described.

When I made the comment above, I assumed a library had already passed that filter.


There are multiple combos. Egui can run on top of Wgpu or the simpler Eframe. Wgpu can be used with various 2D UIs on top. Winit is used by many non-3D programs. Getting all the combos right is tough.

This refuckering reminds me a lot of the JS ecosystem...

I like the term refuctoring for this

One great way to have stability is to rebase old code against new code

Say, you have the first function, with a specific signature and it does its stuff

Then later, you improve the stuff and the signature changes with a breaking change

Do not do that

Instead, create a new function (with a new signature) and push in it the whole code

And rewrite the old function to use your new function

This way, you keep one piece of "production code": the new function. And you keep one piece of legacy-interface code: the old function (which is nothing but a compatibility gateway to the new function, and can safely be forgotten)

Old users see no breakage, new users get the features; you keep a single code path and are only burdened with a small compatibility layer
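
For what it's worth, here's a minimal Clojure sketch of that pattern (all names are hypothetical):

    ;; The new function, with the improved signature, is the single
    ;; piece of "production code".
    (defn fetch-user-v2 [{:keys [id timeout-ms] :or {timeout-ms 5000}}]
      {:id id :timeout timeout-ms})

    ;; The old function keeps its original signature and simply
    ;; delegates to the new one, so existing callers never break.
    (defn fetch-user
      {:deprecated "2.0" :doc "Use fetch-user-v2 instead."}
      [id]
      (fetch-user-v2 {:id id}))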


It's true that if you always add new functions to your library instead of changing existing ones then users can upgrade without breaking. There is real value in that.

But woe unto the user who first starts using your library after a decade of that "evolution" and they are faced with a dozen functions that all have similar but increasingly long names and do very similar things with subtle but likely important differences. (I guess a culture of "the longest function name is probably the newest and the one you want" will emerge eventually.)

Personally, I like when a library's API represents the best way the author knows to tackle a given problem today without also containing an accumulated pile of how they thought the problem should have been tackled years ago before they knew better.

If I want the old solutions, that's what versioning is for. I'll use the old version.


> If I want the old solutions, that's what versioning is for. I'll use the old version.

And you'll miss all the stuff you do not want the old solution for. And you'll keep all the old bugs

> faced with a dozen functions that all have similar but increasingly long names and do very similar things with subtle but likely important differences.

Unless all the old versions are marked as old/deprecated and can be hidden from your view. Then you only care about the old stuff if you used it before and don't want to change
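
Clojure, for one, has a convention for exactly this: vars carry :deprecated metadata that editors and linters (clj-kondo's :deprecated-var linter, for example) can use to warn about or hide old vars. A small REPL sketch, assuming clojure.core/replicate is still marked this way as it has been since 1.3:

    ;; Mark your own old var; tooling can then flag its call sites.
    (defn ^:deprecated old-fn [x] x)

    ;; Core does the same thing:
    (:deprecated (meta #'clojure.core/replicate))
    ;; => "1.3"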


Agreed. Create a new library if there’s truly a better way. That does seem to be what happens in Clojure from what I’ve seen.

> I told Martin as much, and he agreed without hesitation that we needed to find a solution that didn't break current users' code. This is not a normal interaction amongst software engineers

I find this to be extremely "it depends": this is a normal interaction amongst software engineers, if you have mature engineers.

It's only not normal in those ecosystems (Everyone knows which ones those are) which appear to have not had adults in the rooms during the design phase.

If you've only been developing for the last decade or so, you may think it's normal that stuff breaks all the time when software is upgraded. If you've been developing since the 90s, you'd know that "upgrades bring breakage" is a new phenomenon, and the field (prior to 2014 or thereabouts) is filled with engineering compromises made purely to avoid breakage.

It's why, if I have an application from 10 years ago in React, I won't even try to build it again, but if I have an application from 1995 in C, it'll build just fine, dependencies included[1] (with pages of warnings, of course).

[1] C dependency graphs are small and shallow, because of the high friction in adding third-party libraries.


To be fair, React itself has been exceptionally stable compared to the rest of the JavaScript / Node.js ecosystem. It's the other packages that are causing build failures, not React. Yes, React did deprecate and eventually remove some features, and those were real breaking changes. But it was at a far lower pace than every other package out there.

fair

>For example, over its lifetime the Clojure community has shifted from accepting argument lists and named parameters in their functions to accepting a single hashmap. This is because the single hashmap is easier to grow over time.

This seems a little nuts, to be honest. It feels like you're just pushing failures from easy-to-diagnose issues with the function signature, to hard-to-diagnose issues with the function body. Basically the function body now has to support any combination of hash parameters that the function has ever taken over its entire history -- Is this information documented? Do you have test coverage for every possible parameter combo?


It is nuts, especially in ClojureScript.

"Am I missing a key, is the value in the hashmap nil, or was there an upstream error or error in this function that is presenting as nil?"


I'm sympathetic to this idea, but in practice it's very manageable. Function signatures destructure exactly the data that they need, so it's easy to tell what's required and what's optional.

Of course, normal rules apply like, "Don't pollute your program with a proliferation of booleans."


no, you pull out the things you need from the map in the function body and move forward. I recommend either reading up on how this works in Clojure or just trying it, it's pretty simple.

https://clojure.org/guides/destructuring#_associative_destru...

Some extra reading if you're curious: https://softwareengineering.stackexchange.com/questions/2723...
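
A minimal sketch of what that looks like in practice (the function and keys here are made up):

    ;; The destructuring in the signature documents exactly which keys
    ;; the function uses; :or supplies defaults for the optional ones.
    (defn draw-button [{:keys [label width on-click] :or {width 100}}]
      (println "drawing" label "at width" width)
      (when on-click (on-click)))

    ;; Callers pass a single map. A new optional key can be added later
    ;; without breaking either of these call sites.
    (draw-button {:label "OK"})
    (draw-button {:label "Cancel" :width 200 :on-click #(println "clicked!")})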


I think it was meant from old style functions to new style, but not changing a given function’s signature this way.

Caveat: I have only a tiny amount of experience with Clojure and none at all with Scala, so this isn't coming from a place of deep knowledge of either language.

I'm not convinced, though, that the Clojure graph represents something I'd view positively. Notably, the Scala codebase gets smaller at one point, which looks fantastic to me. Nobody wants the world changing under their feet every five seconds, but if code just accumulates without refactors, the end product often becomes not only unusable but also difficult to replace.

Personally, I'd much prefer more frequent refactors in a changing codebase (vs. pure code addition), but with strict adherence to semver. That way I can stick with v3 or whatever if I'm worried about v4 breaking things for me, but the project itself doesn't need to worry about stagnating, or about being stuck with design decisions that aren't relevant anymore.

I'm always happy when I see a library like pyarrow with high version numbers, because I take it as a sign that they're most likely actually following semver, as opposed to libraries that stay on v0 for 10+ years.


This is really surprising.

One of the first things I learned when I started shipping APIs and software packages is that the behavior you expose and the interface you provide for it have to be written in stone. The alternative is to break code and other packages that depend on it. Your package's changes are always somebody else's bugs. Compile-time errors and static analysis provide helpful guarantees and reduce suffering, but that doesn't change the fact that when you introduce changes you're still throwing Legos on the floor of someone's bedroom while they sleep.

Major-version revisions by definition allow for breaking changes, but a breaking change needs strong justification: either you know nobody or few users are using what you've changed, or there's broad consensus that there's a bug (but one that didn't need to be fixed with a minor-version update, so it's probably a design problem, not a code bug), or the people using it broadly agree on the need for or benefit of the change.


As a recent Clojure convert of 3 years or so, I love reading about how amazing Clojure is. As a solo dev there is simply no alternative. It's so nice to return to a project from 2 or 3 years ago and find everything still running and humming along as smoothly as when I started it.

I previously worked in PHP, Perl-cgi, Java, and Python- webtools mostly based on MySQL and other SQL database flavours.

I worked in a Clojure-only shop for a while and they taught me the ways; after that you don't go back. Everything can quickly click into place. It's daunting to start: the learning curve is shallow, and it takes a long time to get anywhere. But as a curiosity it was fun, then I started to hate how everything else was done, and now I've sold my soul to the Clojure devil.


love it!

When I asked ChatGPT for help with Zig during a game jam, it gave me a bunch of code that was no longer valid.

It used to be, but the compiler changed.

I understand it's an extreme example — Zig is both niche and pre-1.0 — but it was the first thing that came to mind as a counterexample to an API that never changes.

It's not just an LLM thing, when I looked up tutorials and docs, most of them were also using code that didn't work anymore. And the library I needed only worked with a specific version, so I had to upgrade.. but not too far!

Boring, "painful" languages, by contrast, were quite productive for game jams. Except the time when adding a member to a class broke the Emscripten build on the last day, and it took me the entire day to track it down (because why on earth would it be that!)


> I told Martin as much, and he agreed without hesitation that we needed to find a solution that didn't break current users' code. This is not a normal interaction amongst software engineers—a breed infamous for their long, drawn out debates on the most minute of details. However, this is absolutely expected in the Clojure community.

Well that's just slander as far as I'm concerned. Of course we other non-clojure programmers believe in backwards compatibility. What a crazy thing to suggest that we don't.


Maybe I didn't do a good enough job demonstrating how common it is for backwards compatibility to be broken. You're right that many devs value stability—not just clojure devs. But there are also many communities where breaking stuff is normal. This is addressing the latter, obviously.

The premise feels weird to me. I read the graphs much more as evidence of how scared the devs are to make changes than of how "stable" the libraries are.

You add the code, and rather than change it if needed, you just leave it there and add more code.

You could argue too that Scala is much safer so changes to the code are not scary and it's easier to be stable even under code changes.


Actually, you can't read either that or the opposite from the graphs: they don't show whether the new code is for new functionality or whether it replaces (without deleting) some old code.

But you're right: that would be particularly useful information.


I think code retention charts will look similar for any major library in any language. Projects accrete code.

You could instead consider:

* How many major version releases / rewrites happen in this language? (This might be a sign of ecosystem instability.)

* How much new code is replacing old code? (This might imply the language needs more bugfixes.)


Not sure what you mean. The Scala example looks nothing like the Clojure examples.

The retention charts show you how much new code is replacing old code, and you can see the releases/rewrites as the code gets replaced.


The charts are very cool. But they’d be more informative if they tracked _interface_ changes. Maybe Scala is more flexible and the code changes are limited to optimizing the underlying implementation while keeping stable interfaces. It’s impossible to tell from the charts.

> code retention charts

I really liked those charts, I wonder how you can generate them, whether there's a tool out there that you can just feed a Git repo into or something.


    pip install git-of-theseus
    git clone repo
    cd repo
    git-of-theseus-analyze .
    git-of-theseus-stack-plot cohorts.json

> I selected the following libraries off the top of my head with three criteria: all have more than 500 stars and are in active use.

This sentence bothered me way more than it should've, for some reason.


Ah! You caught me in an editing discontinuity.

TLDR.

The outcome is the same, statically typed or dynamically typed. In both cases one needs to refactor when there's a breaking change.


> The outcome is the same, statically typed or dynamically typed. In both cases one needs to refactor when there's a breaking change.

No. In statically typed languages, failures are usually caught in CI. In dynamically typed languages, they end up in production - https://github.com/pypa/setuptools/issues/4519


Maybe that's a bad example, as your build can fail because of a breaking change in a dependency regardless of whether you use a statically typed language.

Also, your statement is only partially correct. Breaking changes in dependencies end up in production only if you don't have tests. And I know this is news to many people using static types, but in many Ruby shops, for example, there is test coverage in excess of 90%, and at the very least I never approve a PR without happy-path tests.


> Breaking changes in dependencies end up in production only if you don't have tests.

That's true. However, you have now replaced the work of a compiler with testing.


Compilers don't test, or rather, they test a very specific and narrow set of things relative to what you'd want to test to maintain a working program.

Refactor a function returning a string into another function returning a string and everything compiles, yet without tests nothing works in production, because it's not the same string.

On top of that, mocking in tests can also hide the string-breaking change you don't yet know about.

Off the top of my head, I've seen this happen with a base64 string padded vs. unpadded, or emojis making their way through when they did not before, etc.

So yeah, the compiler tells you which pieces of your jigsaw apparently fit together, but tests show you if you get the right picture on the jigsaw at the end (or on some regions).


Isn't it a shame that we're only allowed to have one or the other.

> Breaking changes in dependencies end up in production only if you don't have tests.

Which are opt-in in dynamically typed languages.

You get the same functionality in statically typed languages and it's not opt-in, AND the developer doesn't have to do the work of type-checking (the compiler does it).


I will keep in mind for future projects that you don't need tests in compiled languages because the compiler does it for you.

Will also keep in mind that tests are optional. This is def. a healthy mindset.


> I will keep in mind for future projects that you don't need tests in compiled languages because the compiler does it for you.

who said that?

> Will also keep in mind that tests are optional. This is def. a healthy mindset.

Once again, I have to ask - where did you get that from?


A test suite often ends up acting as an ad hoc, informally specified, buggy, slow implementation of half a static type system, yes.

Leaking bugs, I believe, doesn't come down to static versus dynamic typing; it mostly comes down to deployment. Your types might match, but you would still leak bugs in dependencies :/

From a CI/CD perspective, you should make sure that on updates, things won't break. As others suggest, a maintainable project would have test suites.

Except if you aim to have a program that you will never update again. Write the code once, compile it, and archive it. When you decide to keep that program available to potential clients, be prepared to back up dependencies, the OS it runs on, and everything that makes it operable. If there is a breaking change in the ecosystem of that program, it will break it.



