If my analogy is flawed (which it almost certainly is), yours is even more so: the computing equivalent of your analogy is preventing runtime errors by simply not allowing software components to interact. Yes, of course, without interaction there are no errors, because there is no computation!
I was thinking about that as I submitted it but was multitasking. You called it, so I gotta correct it. So, let's drop the analogy and go back to what I originally claimed: CPUs modified to protect pointers, arrays, stacks, and so on. The primitives are forced to be used in acceptable ways. The programmer does the rest by expressing the problem in a type-safe HLL.
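To make that concrete, here's a rough software sketch of the kind of bounds enforcement a descriptor-style CPU would do on every pointer access. It's plain C with made-up names, purely for illustration: the point is that the check lives in the primitive itself, not in whatever discipline the programmer remembers to apply.

    /* Software analog of a hardware-checked pointer; names are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int   *base;    /* start of the object */
        size_t length;  /* number of elements it may reference */
    } fat_ptr;          /* the "descriptor" the hardware would carry */

    /* Every dereference goes through a check the programmer cannot skip. */
    int checked_read(fat_ptr p, size_t i) {
        if (i >= p.length) {            /* out-of-bounds -> trap, not corruption */
            fprintf(stderr, "bounds violation\n");
            abort();
        }
        return p.base[i];
    }

    int main(void) {
        int buf[4] = {1, 2, 3, 4};
        fat_ptr p = { buf, 4 };
        printf("%d\n", checked_read(p, 2));   /* fine */
        printf("%d\n", checked_read(p, 9));   /* traps instead of leaking memory */
        return 0;
    }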
Now, almost every hack on a system that I can think of requires forcing a pointer to go out-of-bounds or something like that. Most of the rest work by sending in data that ends up being executed as code. One countermeasure, from Burroughs, is the CPU checking for a code tag bit before executing anything, a bit that can only be set by the OS or an isolated service on a microkernel. So that class of attack wouldn't work.
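A toy version of that tag check, again with hypothetical names and simplified to the point of caricature, just to show the mechanism:

    /* Toy Burroughs-style code tag: every word carries a tag, only the
       trusted loader sets it, and the "CPU" refuses to run anything else. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        unsigned instr;
        bool     code_tag;   /* set only by the trusted loader below */
    } tagged_word;

    /* Stand-in for the OS/microkernel loader: the only place the tag is set. */
    tagged_word load_code(unsigned instr) {
        tagged_word w = { instr, true };
        return w;
    }

    /* Stand-in for the CPU's fetch/execute step. */
    void execute(tagged_word w) {
        if (!w.code_tag) {               /* data smuggled in as "code" */
            fprintf(stderr, "fault: word not tagged as code\n");
            return;
        }
        printf("executing instruction 0x%x\n", w.instr);
    }

    int main(void) {
        tagged_word legit = load_code(0x90);
        tagged_word injected = { 0x90, false };  /* attacker-supplied data */
        execute(legit);     /* runs */
        execute(injected);  /* faults: data never becomes code */
        return 0;
    }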
What remains, with little performance hit and no static checking, is a system where hackers (a) have to con the admin into installing malware, (b) have to break the minimal, trusted loader/installer without abusing the above components, or (c) have to settle for a denial-of-service attack. Forget analogies: the reality is much more impressive given there are almost no high-severity CVEs left. You can also do great static checks and such, as I always recommend. Yet you rarely, if ever, need them in practice if the goal is integrity or confidentiality rather than availability.
"Check and incidentally... mate" (Sherlock Holmes, Game of Shadows)
> Yet you rarely, if ever, need them in practice if the goal is integrity or confidentiality rather than availability.
My goal is correctness. A program that fails with a runtime exception is just as wrong as another that silently corrupts your data. Dijkstra put it very nicely:
“We could, for instance, begin with cleaning up our language by no longer calling a bug a bug but by calling it an error. (...) The nice thing of this simple change of vocabulary is that it has such a profound effect: while, before, a program with only one bug used to be "almost correct", afterwards a program with an error is just "wrong" (because in error).”
Cute quote by Dijkstra that shows he hadn't quite figured out reality yet. So, correctness is your goal. That means you'll have to specify the behavior and the safety/security policy and prove the two consistent. The implementation, both source and binary, will have to be shown equivalent to that spec and proven free of language-level issues. Finally, you have to run it on triple modular redundant hardware [1] that's rad-hard, with similar rigor throughout its lifecycle. Or use run-time checks [2] for each algorithm that can correct errors, probably also on a TMR or rad-hard board. Let's not forget the custom circuitry needed for RAM that actually works [3].
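For anyone unfamiliar with TMR, here's a minimal software sketch of the majority-voting idea behind it (hypothetical code; real TMR is three hardware copies feeding a voter circuit):

    /* Minimal majority-vote sketch of the TMR idea, done in software
       purely for illustration. */
    #include <stdio.h>

    /* Vote over three replicated results; any single corrupted copy is masked. */
    int vote3(int a, int b, int c) {
        if (a == b || a == c) return a;
        if (b == c) return b;
        fprintf(stderr, "unrecoverable disagreement\n");  /* detected, not corrected */
        return a;
    }

    int square(int x) { return x * x; }

    int main(void) {
        int r1 = square(7), r2 = square(7), r3 = square(7);
        r2 ^= 0x4;   /* pretend a bit flip hit one copy */
        printf("voted result: %d\n", vote3(r1, r2, r3));  /* still 49 */
        return 0;
    }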
Darn, Dijkstra or not, you still need some kind of runtime checking and protection for correctness. :)