Hacker News

So is it just writing a bunch of preconditions for a method? This was practice long before "formal methods" became a "trend".

I still haven't had anyone show me how this works.

If you, or anyone reading, could do the following I think it would really help all of us plebs who aren't much involved with academic computer science: can you write, from start to finish, a formally defined application documenting as you go and why you are doing it. For instance an F to C calculator. Could you make one of those and document your thoughts as you create it.




It's more like writing preconditions and postconditions, and proving that the implementation of the function will always satisfy the postconditions. And then also proving that no other part of your program can call the function in a way that violates its preconditions. The proofs are checked automatically.

I can't point you to a simple example. But you could check out the book "Software Foundations" (available for free at https://www.cis.upenn.edu/~bcpierce/sf/current/index.html). If you have time to read the first few chapters it may clarify things.


So then how is it any different from what most people already do? We already use asserts liberally in development, specify everything in documentation, and use linting systems to tell us when we violate our rules.

This just sounds like another name used by people to sound like they are doing something that was already being done in the profession.

Edit: I'll read through that book, thanks!


In how much of the code are you actually applying those best practices?

Are you verifying interactions between all modules?

True formal verification checks everything, because anything unchecked shows up clearly as a baseless assumption (axiom).


Here [0] is a concrete example of using a formal verification system on top of C (and the blog contains many other examples), along with an excellent dialogue around the example.

First, a caveat: this formal verification system is rather simple, and only proves code within the domain of the C standard. Using such a system with a buggy C compiler (pretty much all of them), or running the result on subtly incorrect hardware (like most modern hardware), will still allow the formal verification to be wrong. But it is an order of magnitude better than writing out pre/postconditions in comments, fuzzing, or automated tests. And slightly better than pure functions:

* Pre- and postconditions are code, and they are actually enforced at compile time.

* All possible cases are proven to behave correctly: no need to stochastically test ranges (fuzzing) when the entire range is proven, or to write specific test cases (unit tests) when every corner case has been proven.

* Pure functions can't deal with state. Even monads still require a trap door; this system can prove through those trap doors rather effectively.

Now for the example in question:

Note how the formal verification system points out a problem when the pointers match. Could a linter notice that? Maybe, but probably not; at best it would flag a "potential problem with equal pointers". To know whether it's an actual problem, it would have to understand the source code and the intent of the programmer, like a formal verification system does. The formal verification system knows it's a problem because it deduced it from the rules of the C standard and the code.

So coding standards ("Always check for equal pointers when expecting different ones." or "Never call this function with equal pointers.") are a decent workaround. But an order of magnitude better is: "We've proven the code never violates our expectations." Preventing accidental violations is the whole purpose of those coding standards in the first place; once we've fixed the root problem, the workaround is no longer required.

[0] https://critical.eschertech.com/2010/06/22/aliasing-and-how-...



