Core infrastructure software like database kernels and protocol stacks should be implemented, at least in part, to high-assurance standards. I've verified parts of database engines and other critical code many times with good effect, finding subtle bugs we never would have caught in the wild, and (as expected) no defects were discovered in that code later.
Most systems programming languages don't make it simple, and many people don't do it, but it is definitely valuable and worth doing when the economics make sense.
So you end up stuck with languages like Ada, where the language can prove the correctness of your code (or rather, that your code follows the specification)?
Currently modern C++ plus a ton of specialized tooling that covers much of the same ground as Ada, just not built into the language. It is a balancing act to keep development from becoming unwieldy, and the 80/20 rule applies. Code that is easy to verify also tends to be slow, and that is not a tradeoff you can make for some applications. No one is running something as complex as a database kernel through an end-to-end theorem prover. Design verification scales well (model checkers and similar); implementation verification does not, due to practical limits on what you can prove and accumulated complexity/bugs in the specification itself; and verification of code generation is very limited (I use the LLVM stack). Nonetheless, this gets you to a very low defect rate, and it isn't like this code is being written from scratch every time.
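To make the design-verification point concrete, here is a minimal, hypothetical sketch (not the tooling described above): it checks a safety invariant by exhaustively enumerating the reachable states of a tiny abstract model of a write-ahead-log commit step. The model, its states, and the invariant are invented for illustration; a real model checker automates this kind of exhaustive exploration at far larger scale, against the design rather than the shipped code.

```cpp
// Toy design-level check: enumerate all reachable states of a small abstract
// model and assert an invariant on each one. Everything here is illustrative.
#include <cassert>
#include <cstdio>
#include <set>
#include <tuple>
#include <vector>

// Abstract state of one transaction: has its log record been flushed to
// durable storage, and has commit been acknowledged to the client?
struct State {
    bool flushed = false;
    bool acked   = false;
    bool operator<(const State& o) const {
        return std::tie(flushed, acked) < std::tie(o.flushed, o.acked);
    }
};

// Transitions allowed by the model. The correct design only acknowledges
// after the flush; the buggy variant acknowledges eagerly.
static std::vector<State> successors(const State& s, bool buggy) {
    std::vector<State> next;
    if (!s.flushed) next.push_back({true, s.acked});           // flush log record
    if (s.flushed && !s.acked) next.push_back({true, true});   // ack after flush
    if (buggy && !s.acked) next.push_back({s.flushed, true});  // ack before flush (bug)
    return next;
}

// Safety invariant: an acknowledged transaction must be durable.
static bool invariant(const State& s) { return !s.acked || s.flushed; }

int main() {
    const bool buggy = false;  // flip to true to watch the invariant fail
    std::set<State> seen;
    std::vector<State> frontier{State{}};
    while (!frontier.empty()) {
        State s = frontier.back();
        frontier.pop_back();
        if (!seen.insert(s).second) continue;  // already explored
        assert(invariant(s) && "acked transaction must be flushed");
        for (const State& n : successors(s, buggy)) frontier.push_back(n);
    }
    std::printf("explored %zu states, invariant holds\n", seen.size());
}
```

The state space here is trivially small, which is the point: design-level models stay small enough to explore exhaustively, while the actual implementation does not.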
Even with a fully verified toolchain, there will still be bugs. I once had a customer find a rare bug in a database engine that was ultimately caused by slight differences in behavior between microarchitectures running the same binary.
I seem to recall reading recently that the Ada folks want to borrow some concepts from Rust. To me that says both languages are working toward similar goals.