Compiling a complete language while verifying correctness is undecidable, but that's not necessarily what Rust does.
The compiler can still fail to compile/verify otherwise correct parts of a program. Because it only operates on a subset of the entire possible language and doesn't verify all properties of the language, it's not quite undecidable, but it is still very much NP-hard.
Languages like C, C++, or Ada take the opposite approach. They compile all parts of the language but make little to no attempt to enforce correctness on all those parts. Verifying compilers for those languages only allow a subset they can actually verify, which puts them in the same position as Rust.
At the moment, Rust the language is whatever the main Rust compiler says it is. But once there are meaningfully different compilers, you'll start to notice the issue more: there will likely be parts of the language that one compiler can verify but another can't (and will therefore fail to compile).
> [Rust is] not quite undecidable but it is still very much NP-hard.
> Languages like C, C++, or Ada ... make little to no attempt to enforce correctness
Both statements are laughably false, but for different reasons.
Both are based on the Doctrine of Rust Exceptionalism, which requires that Rust is not just a variation on a theme, but fundamentally different from other languages. Like most doctrines, this one is indefensible.
Rust compilation, like that of C++ and Haskell, is undecidable, not just NP-hard. Non-terminating compilation is artificially curtailed, thus failing to accept an open-ended set of what would have been correct programs.
The Rust compiler performs certain correctness checks that other languages do not. It rejects an infinite set of correct programs that its correctness checks cannot resolve. In this, as in all other particulars, Rust is very much on a continuum with other languages. All strongly typed languages perform a huge variety of correctness checks, some built-in, others programmed, with great success, rejecting the large majority of incorrect programs people write. As a consequence, it is very common for Rust, C++, and Haskell programs to run correctly the first time, once the compiler is satisfied.
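To make the continuum concrete, here's a minimal sketch of the well-known "get_or_insert" pattern (NLL Problem Case #3; the function name is just illustrative), a memory-safe program that rustc's current borrow checker rejects:

```rust
use std::collections::HashMap;

// Memory-safe, but rejected: the shared borrow taken for the `Some` arm
// is treated as lasting until the function returns, so the `insert` in
// the `None` arm is flagged as a conflicting mutable borrow (E0502).
fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
    match map.get(&22) {
        Some(v) => v,
        None => {
            map.insert(22, String::from("hello"));
            &map[&22]
        }
    }
}
```

A smarter borrow checker (e.g. the in-progress Polonius) accepts this, which underlines the point: the accepted set is a property of the checker, not of "correctness".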
I don't believe in any of that Rust exceptionalism BS. Rust is a strongly typed language just like Haskell or any other strongly typed language. This strong typing (particularly through the borrow checker) gives the Rust compiler the ability to verify properties about the code, like any other strongly typed language. The mistake Rust makes is attempting to make that verification a language requirement, while nearly every other language either makes it optional or keeps the verification in a library or tool external to the standard.
With Rust, though, a significant part of the value of things like the borrow checker is the guarantee that if you break the rules set out by the "standard", the code doesn't compile. Even Haskell doesn't make the mistake of trying to enforce something like that (instead, libraries normally add that verification, and those aren't part of the standard).
My point is that Rust has language requirements that guarantee the language actually compilable by its compilers will never come close to covering the entire language defined by the "standard".
There will be normal (emphasis on normal) programs that will not compile on certain Rust compilers but will work just fine on others, because the search space of the type solver or the borrow checker blows up. These issues will also likely pop up between versions of the same compiler, when changes to the internal representation cause certain cases to fall outside the search space.
The only thing the Rust devs can actually do to solve this problem is to add arbitrary restrictions to the language spec based on the limitations of existing compilers.
This issue isn't unique to Rust but basically every other major language has gotten around the problem by not making it a problem in the first place.
---
This comes back to the point in my original comment. Rust the language as defined by the compiler will likely not be consistent with Rust the language as defined by the "spec" (whatever that ends up being). There will be normal code that fails to compile because seemingly arbitrary changes trigger combinatorial explosions.
With C, C++, Haskell, etc., this issue can pop up, but the features that invoke it are far rarer, more self-contained, and easier to diagnose (e.g. template explosions, which are localised, vs the borrow checker, which can be very much not localised).
The original comment was very much a critique of Rust and by no means some type of Rust exceptionalism.
Any hypothetical standards-conformant Rust implementation would have to be as restrictive as rustc or more so, not less. Something like mrustc, which performs no borrow checking, would not be allowed to call itself a conformant Rust implementation.
(I say hypothetical because until there's a spec, it's simply not feasible to build a conformant alternative implementation, as you'd need to be bug-for-bug compatible with rustc.)
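For instance, this program is rejected by rustc's borrow checker (E0502), while an implementation that skips borrow checking, as mrustc does by design, would presumably translate it without complaint:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow of `v`
    v.push(4);         // rustc: cannot borrow `v` as mutable while it is borrowed
    println!("{}", first);
}
```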
This is also not correct. As in, completely wrong.
Presuming a formal spec that rustc happens to enforce, another compiler could simply try longer to resolve types before giving up, admitting some set of programs that rustc gives up on.
The spec could specify exactly the rules rustc actually has, including iteration limits. This would freeze rustc (and so it would be undesirable), but it is an option.
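rustc's recursion_limit attribute is an existing example of such a limit: the same source can pass or fail depending on the configured cap. A toy sketch (this one exercises macro expansion depth, but the same attribute also caps trait resolution):

```rust
// With the cap at 8, expanding `count!` twenty levels deep aborts with
// "recursion limit reached"; remove the attribute (the default is 128)
// and the identical program compiles and prints 20.
#![recursion_limit = "8"]

macro_rules! count {
    () => (0);
    ($head:tt $($tail:tt)*) => (1 + count!($($tail)*));
}

fn main() {
    let n = count!(a b c d e f g h i j k l m n o p q r s t);
    println!("{}", n);
}
```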
The thing we're talking about here has actually changed since Rust 1.0. Rust 1.0 shipped with a rule that said if you match on integers you have to provide a default arm. In lots of code that feels natural, but not everywhere: if you match all possible integers (e.g. for an i8, everything from -128 to 127), the compiler would still say "not good enough" until you wrote the default arm it would never actually need.
That was fixed with feature(exhaustive_integer_patterns) in 2018 or so AIUI.
But you could imagine Rust having been standardised with the old behaviour. Even though it's clearly possible to write a compiler that implements feature(exhaustive_integer_patterns), that compiler would then be non-standard, because Rust programs lacking the default arm for integer matches would be forbidden by the standard.
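Concretely (a sketch; the function is just illustrative), this match covers every i8 value and compiles on a modern rustc, while Rust 1.0 would have insisted on an unreachable `_` arm:

```rust
// Exhaustive over all 256 values of i8, so no `_` arm is needed today;
// under the pre-exhaustive_integer_patterns rules this was an error.
fn classify(x: i8) -> &'static str {
    match x {
        -128..=-1 => "negative",
        0 => "zero",
        1..=127 => "positive",
    }
}

fn main() {
    println!("{}", classify(42)); // prints "positive"
}
```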
The type system is also Turing-complete, and therefore type checking can fail to terminate. It would be arbitrarily hard to write a spec in a non-Turing-complete language such that all implementations would admit the same set of programs, or even (responsive to the original claim) reliably admit only a smaller set.
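A minimal sketch of that non-termination (the trait names are made up): rustc cuts the search off with "overflow evaluating the requirement" (E0275) rather than looping forever, and where that cutoff sits is exactly the kind of thing implementations could diverge on:

```rust
struct Wrap<T>(T);

trait Infinite {}

// Proving `T: Infinite` requires proving `Wrap<T>: Infinite`, which
// requires `Wrap<Wrap<T>>: Infinite`, and so on without bound.
impl<T> Infinite for T where Wrap<T>: Infinite {}

fn needs_infinite<T: Infinite>() {}

fn main() {
    needs_infinite::<u8>(); // error[E0275]: overflow evaluating the requirement
}
```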