> the amount of parallelism they were able to use, which was impractical in C++
You mean after not using OpenMP, not following basic multithreading core guidelines, and not doing the refactoring with a sanity check in TLA+, tools that end up on the front page every time people rediscover them.
Do you believe that "just applying TLA+ to the rendering code in Firefox" is a useful thing to recommend to people? TLA+ cannot scale to a codebase of that size without such dramatic simplifications that it's not going to catch any of these kinds of bugs. This is the entire reason why proving that safe encapsulation of `unsafe` blocks is possible was so important: it lets you focus the proof-engineering effort on a much tinier part of the program. Sure, it can't prove the kind of global invariants TLA+ aims for, but you can actually apply it to a very large codebase like Firefox (and keep it in sync, which is another problem you need to deal with when using tools like TLA+).
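To make "safe encapsulation of `unsafe`" concrete, here is a minimal sketch (the standard `split_at_mut`-style example, not any particular verification tool's output) of an `unsafe` block hidden behind a safe API. The soundness obligation is confined to the body of this one function, regardless of how large the surrounding codebase is:

```rust
/// Splits a mutable slice into two non-overlapping halves at `mid`.
/// The `unsafe` block is confined to this function; proving it sound
/// (the two halves never alias) is a local obligation, independent of
/// the rest of the program, which stays in safe Rust.
fn split_at_mut<T>(slice: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    let len = slice.len();
    assert!(mid <= len, "mid out of bounds");
    let ptr = slice.as_mut_ptr();
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4, 5];
    let (left, right) = split_at_mut(&mut data, 2);
    left[0] = 10;
    right[0] = 30;
    println!("{:?}", data); // [10, 2, 30, 4, 5]
}
```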
Out of curiosity, have you used OpenMP or TLA+? I seem to remember that they solve problems that are really, really different from the use cases for which Rust is designed.
Also, I'll throw out there that TLA+ is nearly as handy for Rust as it is for C++. Safe Rust will keep you from data races that break its safety guarantees, but there's more to correct use of concurrent algorithms than that. TLA+ is a great tool for modelling those other concerns if you need to apply a little more engineering rigor to some of those problems.
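As a hypothetical illustration of the kind of bug I mean, here is a sketch in entirely safe Rust: every access goes through a `Mutex`, so there is no data race in Rust's sense, yet the algorithm is still wrong because a read-modify-write is split across two lock acquisitions. Modelling the protocol in something like TLA+ is one way to catch this class of error; the type system alone won't.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each increment reads the counter, releases the lock, then writes the
// result back under a second lock acquisition. No data race exists, but
// the update is not atomic, so concurrent increments can be lost.
fn sloppy_increment(counter: &Mutex<u64>) {
    let current = *counter.lock().unwrap(); // lock dropped at end of statement
    // ...imagine some work happening between the read and the write...
    *counter.lock().unwrap() = current + 1;
}

fn main() {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    sloppy_increment(&counter);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Would be 8000 if the increments were atomic; depending on the
    // interleaving, the actual value is usually lower.
    println!("final count = {}", counter.lock().unwrap());
}
```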