Ah, I was wondering what would happen if you're using a type from lib-v2 and an intermediary library passes you that type from lib-v1, and the type has changed internally. Good to know the Rust compiler is set up to catch that.
(I've seen cases where that happens with C and C++ software, and things seem to compile and run... until everything explodes. Fun times.)
I thought this was about loading two incompatible versions of a shared object into the same address space at first :-)
The author correctly contrasts Rust's (and NPM's) behavior with that of Python/pip, where only one version per package name is allowed. The Python packaging ecosystem could in theory standardize a form of package name mangling wherein multiple versions could be imported simultaneously (akin to what's currently possible with multiple vendored versions), but that would likely be a significant undertaking, given that a lot of applications probably - accidentally - break the indirect relationship and directly import their transitive dependencies.
(The more I work in Python, the more I think that Python's approach is actually a good one: preventing multiple versions of the same package prevents dependency graph spaghetti when every subdependency depends on a slightly different version, and provides a strong incentive to keep public API surfaces small and flexible. But I don't think that was the intention, more of an accidental perk of an otherwise informal approach to packaging.)
> (The more I work in Python, the more I think that Python's approach is actually a good one ...)
I've come to the opposite conclusion. I've "git cloned" several programs in both Python and Ruby (which has the same behaviour) only to discover that I can't actually install the project's dependencies. The larger your Gemfile / requirements.txt is, the more likely this is to happen. All it takes is a couple of packages in your tree updating their own dependencies out of sync with one another and you can run into this problem. A build that worked yesterday doesn't work today. Not because anyone made a mistake - but just because you got unlucky. Ugh.
It's a completely unnecessary landmine. Worse yet, new developers (or new team members) are very likely to run into this problem, since it shows up while you're getting your dev environment set up.
This problem is entirely unnecessary. In (almost) every way, software should treat foo-1.x.x as a totally distinct package from foo-2.x.x. They're mutually incompatible anyway, and semantically the only thing they share is their name. There's no reason both packages can't be loaded into the package namespace at the same time. No reason but the mistakes of shortsighted package management systems.
RAM is cheap. My attention is expensive. Print a warning if you must, and I'll fix it when I feel like it.
I'm not saying this hasn't happened to you, but I'm curious: are you working with scientific Python codebases or similar? I've done Python development off and on for the last ~10 years, and I think I can count the number of times I've had transitive conflicts on a single hand. But I almost never touch scientific/statistical/etc. Python codebases, so I'm curious if this is a discipline/practice concern in different subsets of the ecosystem.
(One of the ways I have seen this happen in the past is people attempting to use multiple requirements sources without synchronizing them or resolving them simultaneously. That's indeed a highway to pain city, and it's why modern Python packaging emphasizes either using a single standard metadata file like pyproject.toml or a fully locked environment specification like a frozen requirements file.)
I've encountered the same problem with Python codebases in the LLM / machine learning space. The requirements.txt files for those projects are full of unversioned dependencies, including Git repositories at some floating ref (such as master/HEAD).
In the easy cases, digging through the PyPI version history to identify the latest version as of some date is enough to get a working install (as far as I can tell -- maybe it's half-broken and I only use the working half?). In the hard cases, it may take an entire day to locate a CI log or contemporary bug report or something that lists out all the installed package versions.
It doesn't help that every Python-based project seems to have its own bespoke packaging system. It's never just pip + requirements.txt; it'll have a Dockerfile with `apt update`, or some weird meta-packaging thing like Conda that adds its own layers of non-determinism. Overall the feeling is that it was only barely holding together on the author's original machine, and getting it to build anywhere else is pure luck.
That's still the happy case. Once upon a time I spent four days chasing dependencies before reaching out to the original author, who admitted that it hadn't actually worked in several months, but he'd kept on editing the code anyway.
The program depended on fundamentally incompatible sub-dependencies, on different major versions.
I've had similar problems with python packaging in both the web dev and embedded spaces. There are ways to largely solve these issues (use package managers with lock files and do irregular dependency updates), but I rarely see that being done in projects I work in.
If you use gRPC directly and some other library in your stack does as well, it's very likely you'll end up with conflicts, either on gRPC itself or on the proto library under the hood.
I don't know, I can see it both ways. I think it depends on programming context. On one hand you're right that it's annoying and a technically unnecessary gotcha, but for some Python use cases it simplifies the mental model to simply know which version of a particular package is running. For example, pandas and numpy are IMO bad offenders for transitive dependency issues, but it's because they're used as building blocks everywhere and are intended to be used compositionally. It's not uncommon to have to step through a data pipeline to debug it. That would become confusing if it's using 5 different major versions of pandas because each package brought its own. Or a trip down the call stack involves multiple different versions of numpy at each level.
For web dev and something like requests, it's just not as big of a deal to have a bunch of versions installed. You don't typically use/debug that kind of functionality in a way that would cause confusion. That said, it would definitely be great sometimes to just be like "pip, I don't care, just make it work".
Yeah; you've gotta pick your poison. Either you sometimes end up with multiple copies of numpy installed, or sometimes pip errors out and will randomly, at the worst possible time, refuse to install the dependencies of your project. Having experienced both problems a lot of times, I'll take the former answer every time, thank you. I never want surprise, blocking errors to appear out of nowhere. Especially when it's always the new team members who run into them. That's horrible DX.
Having multiple copies of numpy installed isn't an emergency. Tidy it up at your leisure. It's pretty rare that you end up with multiple copies of the same library installed in your dependency tree anyway.
As for debugging - well, debugging should still work fine so long as your debugging tools treat foo-1.x.x as if it were a completely distinct package from foo-2.x.x. Node.js and Rust both handle this by installing both versions in separate directories. The stack trace names the full file path, and the debugger uses that to find the relevant file. Simple pimple. It works like a charm.
> For web dev and something like requests, it's just not as big of a deal to have a bunch of versions installed.
I think you've got it backwards. It's much more of a big deal on the web because bundle size matters. You don't want to bundle multiple 150kb timezone databases in your website.
I've also worked on multiple web projects which ran into problems from multiple versions of React being installed. It's a common problem to run into, because a lot of naively written web components directly depend on React. Then when the major version of React changes, you can end up with a webpage trying to render a component tree where some components were created using a foreign version of React. That causes utter chaos. Thankfully, React detects this automatically and yells at you in the console when it happens.
Another thing I appreciate about this in the Python world is it avoids an issue I've seen in node a lot, which is people being too clever by half and pre-emptively adding major version bounds to their library. So foo depends on "bar<9", despite bar 9, 10, 11, 12, 13, and 14 all working with foo's usage of bar.
The end result of this is that you end up with some random library in your stack (4 transitive layers deep, because of course it is) holding back stuff like chokidar in a huge chunk of your dep tree for... no real good reason. So you now have several copies of a huge library.
Of course new major versions might break your usage! Minor versions might as well! Patch versions too, sometimes! Pre-emptively set upper bounds mainly help with one thing, and that's reducing the number of people who would help "beta-test" new major versions, because they don't care enough to pin their own dependencies.
The worst spaghetti comes from hard dependencies on minor versions and revisions.
I will die on the hill that you should only ever specify dependencies on “at least this major-minor (and optionally and rarely revision for a bugfix)” in whatever the syntax is for your preferred language. Excepting of course a known incompatibility with a specific version or range of versions, and/or developers who refuse to get on the semver bandwagon who should collectively be rounded up and yelled at.
In Rust, Cargo makes this super easy: “x.y.z” means “>= x.y.z, < (x+1).0.0”.
It’s fine to ship a generated lock file that locks everything to a fixed, known-good version of all your dependencies. But you should be able to trivially run an update that will bring everything to the latest minor and revision (and alert on newer major versions).
There's a subtle point there though. When you rely on something that was introduced in x.y.z, stating that your version requirement is x.y.0 is an error that can easily cause downstream breakage.
I’m confused. If you rely on a feature introduced in X.y.z why would you specify X.y.0 to begin with (and not just X.y.z)?
In practice, typical Rust projects that have not put a ton of work into their dependencies encode the X.y.z in Cargo.toml matching the current release at the time they developed the system. So you get at worst an unnecessarily high version requirement, but never a lower one.
Moreover, Rust semver conventions would normally imply that new features are only introduced in X.y releases, so this doesn't really happen in practice!
It's easy to accidentally ship a minimum version requirement that is out of date when you also consistently use lock files pinned to newer versions. The code may silently depend on something introduced in a newer version pulled in by the lock file.
I have literally never run into this being a problem in practice. If someone downstream ever did notice, they can just specify a higher minimum version constraint.
My point was not about "x.y.0" (I misspoke), but "x.y" (or "0.x"), which causes this problem.
Take a look at any random crate's Cargo.toml and you'll regularly see dependencies specified as "1" or "0.3" instead of "1.0.119" or "0.3.39". If another crate depends on this crate and needs a more precise version (say "=1.0.100", perhaps due to a bug introduced in "1.0.101"), but the included library relies on features introduced in some version after "1.0.100", then your library won't compile. Stating your dependency as "1" instead of "1.0.119" is what caused this problem.
This is not hypothetical - I've run into this quite a few times with crates that over-specify their dependencies interacting with crates that under-specify theirs.
Yes, the solution to this is pretty simple - check minimal versions in CI. That's something I do for most of my stuff, but it's not universal.
For fun, you could add this to Python, and I think it would cover a lot of edge cases?
You would need:
A function v_tree_install(spec) which installs a versioned PyPI package like "foo==3.2" and all its dependencies in its own tree, rather than in site-packages.
Another pair of functions v_import and v_from_import to wrap importlib with a name, version, and symbols. These functions know how to find the versioned package in its special tree and push that tree to sys.path before starting the import.
To cover the case where the imported code has dynamic imports, you could also wrap any callable code (functions, classes) with a wrapper that does the sys.path push/pop before/after each call.
You then replace third-party imports in your code with calls assigning to symbols in your module:
    # import foo
    foo = v_import("foo==3.2")

    # from foo import bar, baz as q
    bar, q = v_from_import(
        "foo>=3.3",
        "bar",
        "baz",
    )
Finally, provide a function (or CLI tool) to statically scan your code looking for v_import and calling v_tree_install ahead of time. Or just let v_import do it.
Edit: …and you’d need to edit the sys.modules cache too, or purge it after each “clever” import?
For static analysis, your type checker could understand what v_import does, how it works, and which symbols will actually be there at runtime, but yes, it's starting to seem extremely complicated!
What you do with the return value defines the behaviour you expect from it, so to that extent you can rely on that instead of using isinstance:
    x: Union[T1, T2] = f()
    print(x.foo() ** 12.3)
Perhaps some function could build that Union type for you? It would be a pain to make it by hand if you had 50 different third-party dependencies each pulling in a slightly different requests (but which, as far as you're concerned, all use some small, mutually compatible part of that package.)
If you're importing a module to use it in some way you're also declaring some kind of version dependency / compatibility on it too, so that's another thing your static analysis could check for you. That would actually be incredibly useful:
1/ Do your dependencies import an older version of requests than you do?
2/ Does it matter, and why? (eg x.foo() only exists in version 4.5 onwards, but m1 imports 4.4.)
The problem is they are different types, which has huge downstream impacts. Not least of all would be subclassing.
The static analysis you’ve described could work in simple and trivial cases, but the problem you’re now trying to solve is “what concrete type is this object”, and in a dynamic language like Python this can only be fully and 100% determined at runtime.
Mypy and the like do a good job, but often rely on protocols rather than concrete classes. There’s also no impact if the types are wrong or ignored, whereas for this a typing mismatch becomes a subtle runtime issue.
I have thought about this a bunch (and have been annoyed by it a bunch).
But the main issue here is that Python is somewhat designed around a "scripts and folders of scripts" rather than a "package" design principle, while such a loading system would fundamentally need to always work in terms of packages. E.g. you wouldn't execute `main.py` but `package:main`. (Though this is already the direction a bunch of tooling has moved in, e.g. poetry scripts, some of the WSGI and especially the more modern ASGI implementations, etc.)
Another issue is that Rust can reliably detect collisions between the same type coming from two different versions and force you to fix them.
With a lot of strict type annotations in Python and tooling like mypy this might be possible (with many limitations), but as of today it likely will not be caught in practice. Sometimes that is what you want (duck typing happens to work), but for any of the reflection/inspection-heavy Python libraries this is a recipe for quite obscure errors somewhere in not-so-obvious inspection/auto-generation/metaclass-related magic code.
Python can't, except it can
Anyway, technically it's possible: you can put a version into __qualname__, and mess with the import system enough to allow imports to be contextual based on the manifest of the module they come from. (Though you probably would not be fully standards-conformant Python; but we are speaking about dynamically patching Python's import system, so there is nothing standard about it anyway.)
This is great for avoiding conflicts when you try to get your project running.
It sucks when there is a vulnerability in a particular library, and you're trying to track all of the ways in which that vulnerable code is being pulled into your project.
My preference is to force the conflict up front by saying that you can't import conflicting versions. This creates a constant stream of small problems, but avoids really big ones later. However I absolutely understand why a lot of people prefer it the other way around.
will show which dependencies require this particular version of log, and how they are transitively related to the main package. In this case, you would clearly see that the out-of-date dependency comes from package "b".
There are equivalents for most other package managers that take this approach, and I've never found this a problem in practice.
Of course, you still need to know that there's a vulnerability there in the first place, but that's why tools like NPM often integrate with vulnerability scanners so that they can check your dependencies as you install them.
That's nowhere near as terrible as not being able to resolve a conflict between incompatible versions. Like half of your project can't use Guava X but another half can't use Guava Y, and there is no common version that works. We ran into compatibility problems with our big Java project many times and wasted months on attempting things like jar shading or classloaders. At the end of the day we use shading, but that comes with its own set of annoyances, like increasing the build times and allowing people to occasionally import the wrong version of a library (e.g. shaded instead of non-shaded). The bigger the project the more likely you're going to hit this, and the lack of support for feature-gating dependencies in the Java ecosystem doesn't help.
Go got this right: you want an incompatible version, you have to use a different import path. Then you can only pick one version (which is deterministically the lowest possible version) for a certain import path, not a hundred different versions.
Also forces people to actually take backwards compatibility seriously.
I'm not surprised. Go's design is heavily informed by what does and does not cause cascading design problems in software engineering at scale. These practical concerns are very different from the kinds of issues that academia had been focused on. But practical solutions to practical problems is central to Go's popularity.
No one asked about Go here. And no, it didn’t, it’s the same PITA as in Java or maybe even worse because there are no workarounds like classloading or shading. You have no control over the transitive dependencies. The only thing you can do if there’s a conflict is asking the author to fix one of the conflicting libraries.
If someone talks about a problem I’ll damn well explain other people’s solutions as I please. And no, you resolve conflicts by not having conflicts in the first place.
> And no, you resolve conflicts by not having conflicts in the first place.
That means you can't use library A and unrelated library B together in the same project, even though you can use A alone and can use B alone. That's lack of orthogonality.
No, you seriously discourage libraries from breaking compatibility by removing the possibility to hide behind different pinned versions and version ranges.
I import a@1 and b@1
a@1 transitively depends on c@1
b@1 transitively depends on c@2
Even with different import paths, I still have two different versions of c in my codebase. It'll just be that one of them is imported as "c" and the other will be imported as "c/v2" - but you don't need to worry about that, because that's happening in transitive dependencies that you're not writing.
You still have the same issue of needing to keep track of all the different versions that can exist in your codebase.
It’s c and c/v2, not c@1.0.0, c@1.0.5, c@1.0.10, c@1.1.3, c@2.0.0, c@2.3.1, ... Each necessary because packages in the middle have decided to pin versions or add upper bounds to work around bugs. That’s a huge difference.
FWIW, I've just pulled up a pretty large project I work on using NPM, and almost all of the duplicate dependencies had different major versions. Most of the ones that had the same major version were 0.x dependencies with different minor versions.
So I'm still not convinced that Go's approach is materially different here - certainly in terms of the practical output, NPM does a good job of ensuring that the fewest number of different versions will get installed for each dependency.
You forgot the part where npm people release new major versions for very little reason all the time, because there's nothing stopping them. Go authors, on the other hand, are generally really reluctant to change to a new path. Go to a relatively large Go codebase and count the v2s. Then do v3.
Coming back to a midsized JavaScript codebase after a few months and trying to upgrade to new major versions of things has always been a shitshow.
Backwards compatibility is more difficult in Rust for many reasons. For example, you can't add a new item to an enum without creating missing-case errors everywhere it is used.
That's a true effect, although I'd question whether it makes it harder or easier for things to be backwards compatible. I use Rust because I trust it to throw up a bunch of errors when I make changes; if I handle all the cases of an enum somewhere, and suddenly there's a new enum variant, the answer is probably that I need to handle the new variant there too.
Adding an enum variant can be backwards incompatible just as easily in languages that don't do this, you just don't get to see the error at compile time.
Right. This even tidily forbids exhaustive matching for other people's code using your crate (i.e. requires them to write a default arm) but still allows it within the crate, the reasoning being that you should know which values you added even if you never promise an exhaustive list to your users.
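A minimal sketch of what I understand that to mean, assuming it's the #[non_exhaustive] attribute (the names here are made up):

    #[non_exhaustive]
    pub enum Error {
        NotFound,
        PermissionDenied,
    }

    // In a downstream crate this match is required to have a catch-all arm,
    // so adding another Error variant later isn't a breaking change for that
    // crate. Inside the defining crate, exhaustive matching is still allowed.
    fn describe(e: &Error) -> &'static str {
        match e {
            Error::NotFound => "not found",
            _ => "some other error",
        }
    }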
How does this work? Assume that the log crate in its internal state has a lock it uses for synchronizing writing to some log endpoint. If I have two versions of log in my process then they must have two copies of their internal state. So they both point to the same log endpoint, but they have one mutex each? That means it "works" at compile time but fails at runtime? That's the worst kind of "works!"
Or if I depend transitively on two versions of a library (e.g. a matrix math lib) through A and B and try to read a value from A and send it into B. Then presumably due to type namespacing that will fail at compile time?
So the options when using incompatible dependencies are a) it compiles, but fails at runtime, b) it doesn't compile, or c) it compiles and works at runtime?
If the log endpoint is external to your process and two different copies of the logging crate in the same process writing to it cause problems, then two identical copies of the logging crate in different processes will likely also cause problems. The solution here is global synchronization, not synchronization just within one process.
If the log endpoint is internal to your process, how did you end up with two independent mutexes guarding (or not guarding) access to the same resource? It should be wrapped in a shared mutex as soon as you create it, and before passing it to the different versions of the logging crate. And unless you use unsafe, Rust's ownership model forces you to do that, because it forbids having two overlapping mutable references at the same time.
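A rough sketch of the shape I mean; the two init functions just stand in for handing the handle to the two hypothetical versions of a logging crate:

    use std::sync::{Arc, Mutex};

    struct LogSink {
        lines: Vec<String>,
    }

    fn init_v1(sink: Arc<Mutex<LogSink>>) {
        sink.lock().unwrap().lines.push("log v1 attached".to_string());
    }

    fn init_v2(sink: Arc<Mutex<LogSink>>) {
        sink.lock().unwrap().lines.push("log v2 attached".to_string());
    }

    fn main() {
        // The sink is wrapped in one shared mutex before anyone sees it, so
        // there's no way to end up with two independent locks guarding it.
        let sink = Arc::new(Mutex::new(LogSink { lines: Vec::new() }));
        init_v1(Arc::clone(&sink));
        init_v2(Arc::clone(&sink));
        println!("{} writers attached", sink.lock().unwrap().lines.len());
    }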
Perhaps a log wasn't the best example due to how the resource (a log sink) is often external. Take some simpler example: a counter (such as sequential ID generator).
It's an in-memory counter doing an atomic increment that returns the next ID. Two of my project's dependencies depend on it when they create new items. Both want to generate process-wide unique IDs. But if they depend on two versions of the crate then there would be two memory locations, and thus two sequences of IDs generated, so two of the frogs in my game will risk having the same ID?
There is no sharing problem here, the problem is the opposite: that there are two memory locations instead of one?
For the counter example, with what I assume was an edit change about frogs, each of the two distinct counter versions produces, as it promised, unique IDs.
If the frog numbering code is using either counter, it gets unique IDs. The problem only arises if somehow it's using both of them. But why would we expect the "unique" IDs from two different pieces of software to be guaranteed never to collide? Imagine instead of counter 1.2.3 and counter 4.5.6 we've got bob_counter and cathy_counter; are you still astonished that they might give out the same IDs? Neither mentions the other, and perhaps they're the same code, copy-pasted by egomaniacs.
> are you still astonished that they might give out the same IDs?
No I think two sequences is exactly what's asked for. What I'm wondering is whether this is a warning when cargo fails to restore the crates and must use incompatible versions. I'd only be surprised if the behavior was a clean compile + a runtime crash. Because that's usually not the design chosen in Rust.
> If the frog numbering code is using either counter, it gets unique IDs
I guess this is the issue: what I'm imagining is that the counter crate has static mutable state. That's not a problem with atomics, but it's an antipattern (and especially so in Rust). The solution of course is to NOT use static mutable state at all. Whoever wants to create either a Frog or a Wizard must pass in a universe, from which it can grab the next ID, so it's a counter instance rather than get_next_static_id().
My question isn't "do we really get two memory locations" (of course we do) my question is: how afraid should one be about this, i.e. what is the behavior of the compiler/cargo or other linters when it comes to warnings etc? Are there any non-contrived scenarios where bumping a version of a dependency causes a problem at runtime (only)?
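Roughly this shape, with made-up names:

    // The counter lives in a value you own and pass around, not in a static
    // inside some crate that might get compiled in twice.
    struct Universe {
        counter: u64,
    }

    impl Universe {
        fn new() -> Self {
            Universe { counter: 0 }
        }

        fn next_id(&mut self) -> u64 {
            let id = self.counter;
            self.counter += 1;
            id
        }
    }

    struct Frog { id: u64 }
    struct Wizard { id: u64 }

    fn main() {
        let mut universe = Universe::new();
        // Both entity types draw IDs from the same counter instance, so they
        // can't collide no matter how many crate versions end up linked in.
        let frog = Frog { id: universe.next_id() };
        let wizard = Wizard { id: universe.next_id() };
        assert_ne!(frog.id, wizard.id);
    }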
Sorry yeah I mean "resolve the dependency versions" from the listed version requirements, and if needed downloading them.
"Restore" is used by e.g. NuGet (.NET) as you suggest and others (npm for js, etc).
The cargo dependency resolver runs and the results can be viewed by cargo tree. I saw in the docs now that cargo tree -d can be used to help find incompatible packages.
The idiomatic Rust way to avoid a scenario where we might have different versions of the unique IDs is to reify unique IDs as an actual type (say named UniqueID) rather than just having a tacit understanding that these particular integers are supposed to be unique.
This lets the type's owner decide what affordances to give it (probably adding two UniqueIDs together is nonsense, but comparing them for equality definitely makes sense - should they be Ordered... chronologically? Or not?)
But importantly, now that it's a type, Rust knows counter v1.2.3's UniqueID and counter v4.5.6's UniqueID are different types, even though they have the same spelling. So this code now won't compile unless everybody is consistent, e.g. the Frogs and Wizards all use v1.2.3 but the unrelated Space Combat module works with the v4.5.6 UniqueID. Code that looks like it could work, where we try to use the unique ID of a Wizard as the identity of a sub-space laser blaster, won't compile.
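As a sketch (all names invented; the cross-version mismatch is shown only in a comment, since a single snippet can't actually link two versions of a crate):

    use std::sync::atomic::{AtomicU64, Ordering};

    // Inside the counter crate: the ID is a real type, not a bare integer.
    #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
    pub struct UniqueId(u64);

    static NEXT: AtomicU64 = AtomicU64::new(0);

    pub fn next_id() -> UniqueId {
        UniqueId(NEXT.fetch_add(1, Ordering::Relaxed))
    }

    // In downstream code, v1's UniqueId and v2's UniqueId are distinct types
    // even though they're spelled the same, so something like
    //
    //     fn fire(blaster: counter_v2::UniqueId) { /* ... */ }
    //     fire(wizard_id_from_v1); // error[E0308]: mismatched types
    //
    // fails to compile instead of silently mixing the two ID sequences.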
That safety is merely accidental; it only appears if the dangerous state (the atomic counter memory location) is in the same crate as the exposed UniqueNumber type.
A pair of crates (one which defines the struct UniqueNumber(u32), and one which produces sequences of them) would still be suffering from this, because the sequencing crate could use v1 of the unique number struct crate, in both its v1 and v2.
This of course is even more contrived (now we have a specific split of state and types instead). So the real question is: does this happen in non-contrived scenarios in the wild?
I feel like the separate crate with UniqueNumber(u32) is a bad idea. There's nothing unique about these; you're gaslighting me. Like a PrimeNumber(u32) that's just a thin wrapper and expects somebody else to ensure they're prime - no, I know the compiler doesn't understand English, but these aren't prime (or unique) numbers, you're bad at your job.
One of my favourite decisions about Rust which could have gone the other way was their choice to define &str (and String) to be actual text. &str is basically &[u8] but with an extra requirement that this is valid UTF-8 text, likewise then String is basically Vec<u8> but requiring UTF-8.
I feel like UniqueNumber is the same, if you call yourself UniqueNumber don't just be a thin wrapper around an integer, you need to own that uniqueness problem.
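For example, nothing version-related, just the plain standard-library check:

    // &str is only ever handed out for bytes that are valid UTF-8; arbitrary
    // bytes stay as &[u8] until they've been checked.
    fn main() {
        let bytes: &[u8] = &[0x66, 0x72, 0x6f, 0x67]; // "frog"
        let text: &str = std::str::from_utf8(bytes).expect("valid UTF-8");
        assert_eq!(text, "frog");

        let not_text: &[u8] = &[0xff, 0xfe];
        assert!(std::str::from_utf8(not_text).is_err()); // bytes, but not text
    }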
FYI, Python can/did support multiple versions via buildout (http://www.buildout.org/en/latest/) but it's complicated and wide-scale support has probably bit-rotted away.
Their internal states in Rust are also namespaced, so two incompatible crates in the same process won't observe each other's symbols. If they access external resources that are not namespaced, though, that could be a problem.
"The same heap" isn't a coherent concept here. Your malloc-implementing memory allocator has some global state, and that state has some pointers to some addresses it got from mmap and some metadata about how long those spans of memory are, which parts are unused, and how long the values it has returned from malloc previously are. If you managed to use two of these, they would each contain data referring to different non-overlapping sets of memory mappings. If you accidentally used a pointer from one with the other, you would go instantly to C UB land:
The free() function frees the memory space pointed to by ptr, which must have been returned by a previous call to malloc(), calloc() or realloc(). Otherwise, or if free(ptr) has already been called before, undefined behavior occurs. If ptr is NULL, no operation is performed.
I can't speak to multiple users of sbrk, I assume that would fail. That's a property of sbrk though, not of malloc (or memory allocation in general); on Windows, you can have as many implementors of malloc as you want, so long as malloc/free calls happen to the same implementor. Each malloc implementor just asks for anonymous backing pages for their own heaps (via VirtualAlloc/mmap).
That's not a very clear example. You don't need to be using multiple versions of the same dependency to contend on access to stderr/out, just having a println in your code along with logging code will have the same effect.
I haven't ever observed a problem with concurrent access to stdout/err though, I expect because the methods for accessing stdout/err lock them for the duration of their printing. If you Google for "Rust print console slow", you'll probably find advice to explicitly lock it, to avoid each individual println acquiring the lock.
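Roughly the shape of that advice, as I understand it:

    use std::io::{self, Write};

    // Each println! takes the stdout lock for a single call; for a burst of
    // output it's common to take the lock once and hold it for the whole loop.
    fn dump(lines: &[String]) -> io::Result<()> {
        let stdout = io::stdout();
        let mut out = stdout.lock();
        for line in lines {
            writeln!(out, "{line}")?;
        }
        Ok(())
    }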
I'm not sure I understand the use case here. Are you asking if you can depend on two versions of the same crate, for a crate that exports a `#[no_mangle]` or `#[export_name]` function?
I guess you could slap a `#[used]` attribute on your exported functions and use their mangled name to call them with dlopen, but that would be unwieldy, and guessing the disambiguator used by the compiler would be error-prone to impossible.
Other than that, you cannot. What you can do is define the `#[no_mangle]` or `#[export_name]` function at the top-level of your shared library. It makes sense to have a single crate bear the responsibility of exporting the interface of your shared library.
I wish Rust would enforce that, but the shared library story in Rust is subpar.
Fortunately it never actually comes into play, as the ecosystem relies on static linking.
> I'm not sure I understand the use case here. Are you asking if you can depend on two versions of the same crate, for a crate that exports a `#[no_mangle]` or `#[export_name]` function?
Yes, exactly.
> Other than that, you cannot.
So, to the question "Can a Rust binary use incompatible versions of the same library?", the answer is definitely "no". It's not "yes" if it cannot cover one of the most basic use cases when making OS-native software.
To be clear: no language targeting OS-native dynamic libraries can solve this; the problem is in how PE and ELF work.
I agree the answer is no in the abstract, but it is not very useful in practice.
Nobody writing Rust needs to cover this "basic use case" you're referring to, so it is the same as people saying "unsafe exists so Rust is no safer than C++". In theory that's true; in practice, in 18 months, 604 commits and 60,008 LoC I wrote `unsafe` exactly twice. Once for memory-mapping something, once for skipping UTF-8 validation that I'd just done before (I guess I should have benchmarked that one, as it is probably premature).
In practice when developing Rust software at a certain scale you will mix and match incompatible library versions in your project, and it will not be an issue. Our project has 44 dependencies with conflicting versions, one of which appears in 4 incompatible versions, and it compiles and runs perfectly fine. In other languages I used (C++, python), this exact same thing has been a problem, and it is not in Rust. This is what the article is referring to
I cannot shake the feeling that this is actually a misfeature that will get people into trouble in new and puzzling ways. The isolated classloaders in Java and the assembly domains in .NET didn't turn out to be very bright ideas, and from a software design perspective this is virtually identical.