> Between hardware flaws like Rowhammer, Meltdown, Spectre, etc., it's not possible to say that a piece of code isn't going to trigger something, intentionally or not.
I guess I was thinking of software vulnerabilities specifically (although in the context of this article I should have thought more about the hardware side). Or, to be even more specific, flaws that originate in that particular program.
You can formally prove that a program is a refinement of a specification, or that a subroutine always obeys certain invariants (when executed honestly). See Dafny and Ada/SPARK for examples.
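For a feel of what that looks like, here's a minimal Dafny sketch (the method and its spec are mine, just for illustration): the ensures clauses are the specification, and the verifier proves at compile time that the body refines them, or rejects the program.

```dafny
// The ensures clauses are the spec; the verifier statically
// proves the body satisfies them -- no test cases involved.
method Max(a: int, b: int) returns (m: int)
  ensures m >= a && m >= b   // m bounds both inputs...
  ensures m == a || m == b   // ...and is one of them
{
  if a >= b {
    m := a;
  } else {
    m := b;
  }
}
```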
This reduces to theorem proving with an SMT solver like Z3, if your language is Turing-complete and your invariants are arbitrarily general. SMT solvers are fast and powerful these days, but the usual issues are:
1. It's too cumbersome to define invariants or write the kind of specs that can check program correctness (see the sketch after this list).
2. Solver time blows up exponentially as the state space grows.
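To make #1 concrete, here's a hedged Dafny sketch (SumTo is a helper I made up for the example): the loop body is one line of actual logic, but it won't verify until you hand-write the right invariants, and realistic code needs far hairier ones.

```dafny
// Spec helper: the sum of the first n elements of a.
function SumTo(a: seq<int>, n: nat): int
  requires n <= |a|
{
  if n == 0 then 0 else SumTo(a, n - 1) + a[n - 1]
}

method Sum(a: seq<int>) returns (s: int)
  ensures s == SumTo(a, |a|)
{
  s := 0;
  var i := 0;
  while i < |a|
    invariant 0 <= i <= |a|      // without these two invariants,
    invariant s == SumTo(a, i)   // the verifier rejects the loop
  {
    s := s + a[i];
    i := i + 1;
  }
}
```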
Modern verification-aware languages are finally addressing #1, though they sometimes have blind spots (e.g. Dafny can't reason about concurrency, while TLA+, which can, is very abstract and high-level).
Writing programs as bundles of isolated, formally verified modules, then proving the correctness of the whole program against a high-level spec that assumes module correctness, lets you scale to real, large programs. This is how Dafny works, for example -- functions declare pre- and post-conditions, so Dafny can throw away its reasoning about a callee's internals when checking a caller.
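A minimal sketch of that modularity (Clamp and Normalize are invented names, not from the thread): when Dafny checks Normalize, it uses only Clamp's contract, never its body.

```dafny
// Callee: the contract is all that call sites get to see.
method Clamp(x: int, lo: int, hi: int) returns (r: int)
  requires lo <= hi
  ensures lo <= r <= hi
{
  if x < lo { r := lo; }
  else if x > hi { r := hi; }
  else { r := x; }
}

// Caller: verified against Clamp's pre/post-conditions alone, so
// changing Clamp's internals never forces re-reasoning here.
method Normalize(x: int) returns (p: int)
  ensures 0 <= p <= 100
{
  p := Clamp(x, 0, 100);
}
```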
This strategy really is overlooked! It's powerful enough that you can eliminate memory/argument checks at compile time, ditch all your unit tests, and KNOW your program implements its spec. It should be mandatory for all safety-critical systems that can't rely on hardware fail-safes (e.g. airplanes).
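As a small example of erasing a runtime check at compile time (again a Dafny sketch, with a made-up name): the precondition moves the bounds check from the method body to every call site, where it's discharged statically.

```dafny
method Get(a: array<int>, i: int) returns (v: int)
  requires 0 <= i < a.Length   // proved at each call site, at compile time
  ensures v == a[i]
{
  v := a[i];                   // no runtime bounds check needed here
}
```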
On the software side, there's a lot that can be proven and said about small sections of programs. I think a sibling commenter mentioned an ISO standard related to that (and I think it covers some hardware bits). My layman's understanding is that it's a way to specify as many assumptions as possible about how the code will be run, where, and what side effects it will have (and no others). That makes for a really nice set of assertions, but the end effect is that you can't say anything about certain kinds of programs; i.e. this program is safe, that program can't be safe, and these programs over here are unknowable. Gödel should never have been allowed to publish anything.