Personally, I prefer languages that make small incremental breaking changes, as it prevents cruft from accumulating over time.
The important thing is to provide a migration path (e.g., begin by marking something as deprecated). Then provide refactoring tools to help migrate or interface with legacy code.
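Rust already supports that first step directly via the `#[deprecated]` attribute. A minimal sketch of the "mark it deprecated, keep it working" pattern (the function names here are hypothetical, not from any real library):

```rust
// Old entry point kept alive so existing callers still compile;
// they just get a compiler warning nudging them toward the new API.
#[deprecated(since = "1.2.0", note = "use `parse_config` instead")]
pub fn load_config(path: &str) -> String {
    parse_config(path)
}

// Hypothetical replacement API.
pub fn parse_config(path: &str) -> String {
    format!("parsed {}", path)
}

fn main() {
    // Calling the deprecated function warns at compile time but still runs.
    #[allow(deprecated)]
    let cfg = load_config("app.toml");
    println!("{}", cfg); // prints "parsed app.toml"
}
```

The warning gives downstream code a whole release cycle (or several) to migrate before the old function is actually removed.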
I think C++ would have been much better off had it been willing to do this. Paraphrasing Stroustrup: C++11 has within it a smaller, more elegant language that's trying to break free.
Being able to run old legacy code is one of the most important arguments for C++. Every new feature is an add-on (not an "upgrade" and not a "migration").
At the fast pace at which Rust is developing, I hope the language will have a good way to handle Rust 2.0 code together with Rust 1.x code. IMHO, providing practical tools for an upgrade path is something that should be thought out before releasing 1.0.
Objective-C is a good example of successfully migrating the language forward. Breaking changes were introduced slowly. Library APIs are marked as deprecated several releases prior to removal. The compiler warns you of their use. Automatic reference counting can be turned off for legacy files, but a tool is provided that will help you convert a class to use ARC.
Code written in ObjC a couple of years ago will likely not run today without modifications. But overall this has been a net positive for the language: it's actually become really pleasant to use, and it's in part why Apple's platform has thrived.
I think optimizing for the past is the wrong thing to do. If your language is successful, then most of the code is yet to be written. So if your choice is between making future code easy to write, or not breaking legacy code, then you should err on breaking legacy code and provide deprecation/migration tools.
The concern isn't that the pace of breaking changes will mean Rust breaks semantic versioning, but that the duration between Rust 1.x and 2.x will be short. In practice, it doesn't make a difference what number is assigned to a release if the major version's period of stability turns out to be short.
Can you comment on that -- possibly entirely incorrect -- concern I have about Rust's development? Are we going to see Rust 2.x popping up in six months?
We have no current timeline for a 2.0. It certainly will not be on the order of months; I would prefer on the order of a decade; I'd bet on the order of years.
Furthermore (and this is speculation, since again, we haven't talked about it as a group), I wouldn't imagine a Rust 2.0 where breaking changes work the way they do today. I would imagine a very long period of deprecations, a nice upgrade path, and all that jazz. Nobody likes it when the entire world changes out from under them all at once, and sudden, massive changes are something we're trying to avoid with the release train model.
Will it really cease making all breaking changes? In the past, the Rust team has been open to the possibility of making changes that technically break backwards compatibility but that they view as unlikely to cause many problems in the real world. This is still different from ceasing breaking changes in the sense of SemVer.
Compatibility and stability are always somewhat fuzzy. For example, strictly speaking in JavaScript, it's basically impossible to make any semver-compatible change to any library; adding a method "foo" could break anybody who was counting on monkey-patching in a method called "foo". But people still get a lot of mileage out of semver in the node.js/io.js community as a promise that we intend not to break code. Likewise, in Rust we might break some subtle details of behavior that we explicitly left unspecified, such as type inference or heap layout, but we'll do our best to avoid breaking real-world uses of the language or libraries.
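The same hazard has a Rust analog: extension traits play the role of monkey-patching, and if a library later adds an inherent method with the same name, inherent methods win method resolution, silently changing what downstream calls do. A contrived sketch (all names invented):

```rust
// Library v1 ships a type with no `describe` method.
struct Widget;

// A downstream user "monkey-patches" the missing method in
// via an extension trait.
#[allow(dead_code)]
trait WidgetExt {
    fn describe(&self) -> &'static str;
}
impl WidgetExt for Widget {
    fn describe(&self) -> &'static str { "user extension" }
}

// If library v1.1 later adds an inherent `describe`, it takes
// priority over the trait method in method resolution.
impl Widget {
    fn describe(&self) -> &'static str { "library inherent" }
}

fn main() {
    let w = Widget;
    // This call used to hit the extension trait; now it silently
    // resolves to the inherent method instead.
    println!("{}", w.describe()); // prints "library inherent"
}
```

So even a strictly additive release can change behavior for some user somewhere, which is why semver works better as a statement of intent than as an airtight guarantee.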