> If you can’t define the operational semantics of a system then you can’t rigorously convince yourself or anyone else that it is “secure.”
Well. This means no one can provide any security guarantee for any remotely realistic system, because no modern stack has a completely understood and formalized operational semantics. But I definitely know that some systems are much more secure than others! So this seems like a situation where perfect is very much the enemy of "1000x better than it could've been".
Furthermore, formalization is "above and beyond" best practice, so you're unlikely to be accused of negligence if you do "better than best practice but still not formalized".
Finally, the original claim is not true. There are plenty of non-operational semantics (denotational semantics, for example) that are useful for proving security properties. And sometimes no semantics is needed at all for large swaths of the system. Sandboxing is an excellent example of the latter. For some very reasonable attacker models, you need an operational semantics for the sandbox, but you don't need one for whatever's running inside the sandbox, and you can still provide fairly strong security guarantees.
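To make that concrete, here's a minimal sketch of what I mean, using Linux's seccomp "strict mode" as the sandbox (illustrative only; any sandbox with a small, well-understood enforcement boundary works the same way). You reason about the kernel's policy, not about the payload:

```c
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>

int main(void) {
    /* Strict mode: from here on, the kernel permits only read, write,
     * _exit and sigreturn; any other syscall kills the process. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
        return 1;

    /* Whatever runs after this point -- however complex or unanalyzed --
     * can only touch already-open file descriptors and exit.  The
     * guarantee comes from the sandbox's small, analyzable semantics,
     * not from understanding the payload's semantics. */
    const char msg[] = "hello from inside the sandbox\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* exit_group() (what returning from main would use) isn't on the
     * allow-list, so exit via the raw exit syscall. */
    syscall(SYS_exit, 0);
}
```

The concession is baked into the attacker model: this says nothing about what the payload does with the descriptors it already holds, only that it can't reach anything else.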
There are realistic systems that are operationally secure by the given standard. Nuclear launch systems are the obvious example.
“Best practice” is a euphemism for whatever it is the majority does. Nobody ever got fired for buying IBM. That’s not even reasoning.
Finally, your paragraph about sandboxing is ill-considered, but not that wrong. First, all programs have operational semantics; the only question is how well understood they are. Second, sandboxing is a constraint that is easily expressed formally by way of logical consequence. I’m surprised you’re taking issue with that since you give a great example thereof.
> There are realistic systems that are operationally secure by the given standard. Nuclear launch systems are the obvious example.
AFAIK there aren't really any examples of production systems that have meaningful, formally verified security properties without gaping holes.
Do you have a good paper about the full stack formalization/verification of a nuclear launch system?
> “Best practice” is a euphemism for whatever it is the majority does. Nobody ever got fired for buying IBM. That’s not even reasoning.
1. Companies are sometimes sued, successfully, for not following "best practices". And on technical merits. E.g., the Toyota unintended-acceleration lawsuits.
2. It most definitely is reasoning! Here's the cartoon derivation: "If I buy IBM I do not get fired. Therefore, I will buy IBM". This is reasoning about the social system, not the technical system. But then, security is both a technical problem and a social problem. $Billions on formal verification can't stop the simplest of phishing attacks.
> First, all programs have operational semantics; the only question is how well understood they are.
Well... I guess technically. But that's not typically what people mean when they say something "has a(n operational) semantics".
Typically when people say "X has an operational semantics" they mean "someone has actually tex'd/coq'd/pencil'd the transition rules and maybe proved things about them".
Example: If I implement a programming language without doing any theory and you ask if I have an operational semantics, the only non-confusing answer is "no", or perhaps "the semantics is defined by the implementation of the compiler", which is just a tongue-in-cheek way of saying "no". This doesn't mean that no operational semantics exists; it just means I haven't written it down in the form of transition rules or a coq file or whatever.
If I give my language a denotational semantics and you ask if I have an operational semantics, I'll say "no". But again, that doesn't mean the operational semantics doesn't exist. It just means I haven't written it down and proven a correspondence.
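For concreteness, this is the kind of artifact I mean by "written down the transition rules": a toy sketch (in Lean, purely illustrative) of an expression language with its small-step semantics as an inductive relation.

```lean
-- Toy expression language: numerals and addition.
inductive Expr where
  | num : Nat → Expr
  | add : Expr → Expr → Expr

-- Small-step transition relation: `Step e e'` means "e steps to e'".
-- "Having an operational semantics" in the usual sense means rules like
-- these have actually been written down (and, ideally, mechanized so you
-- can prove things about them).
inductive Step : Expr → Expr → Prop where
  | addL {e1 e1' e2 : Expr} :
      Step e1 e1' → Step (Expr.add e1 e2) (Expr.add e1' e2)
  | addR {n : Nat} {e2 e2' : Expr} :
      Step e2 e2' → Step (Expr.add (Expr.num n) e2) (Expr.add (Expr.num n) e2')
  | addNum {n1 n2 : Nat} :
      Step (Expr.add (Expr.num n1) (Expr.num n2)) (Expr.num (n1 + n2))
```

Doing this for a toy language is easy; doing it for a full modern stack is exactly the part nobody has.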
> I’m surprised you’re taking issue with that since you give a great example thereof.
I think you missed the fundamental moral of the sandboxing example: perfect is the enemy of good enough.
Sandboxes allow for security guarantees without formalizing every aspect of the system. In fact, I conjecture that these sorts of "don't try to formalize everything" approaches toward formal security guarantees are really the only ones that scale.
If you try to formalize everything, you'll drown. If you strategically concede defeat and admit that some parts of the system aren't possible to formalize, you can get a lot of strong guarantees. As long as the concessions are strategic. E.g., sandboxing whenever the permissions interface is simple but the implementation is complex, and the failure modes permit sandboxing.
Or, to put that observation in a pithy phrase: "perfect is the enemy of good enough."
> Do you have a good paper about the full stack formalization/verification of a nuclear launch system?
Bwahahaha you're funny.
> If you try to formalize everything, you'll drown. If you strategically concede defeat and admit that some parts of the system aren't possible to formalize, you can get a lot of strong guarantees.
You're contradicting yourself again. There's a reason why weakness is a strength in formalisms.
No, I'm curious and would use that citation! Even just a paper saying "this has been done" without any details? I have a hard time believing this has been done, and people are allowed to talk about it on online forums, but there isn't a single citation even acknowledging its existence.
> You're contradicting yourself again. There's a reason why weakness is a strength in formalisms.
I'm not sure what this means, but it sounds a lot like what I've been saying all along?