Intel's work is great. The hardware industry in general is light-years ahead of the software industry on applied formal methods. The reason is straightforward: bad circuits can't be fixed with an update. That recall made Intel and others take a second look at the balance sheet, with nice results. :)
I saw the TLA+ talk [1] recently and was surprised there was no discussion of code generation.
Once you've spent significant effort writing and verifying a formal spec for a program, there should be an automated mapping from it to the implementation, or at least to a template of that implementation.
There is a difference between formal specification and formal verification. Formal specification is what we have with TLA+: you create a high-level description of your system, to which you apply software tools to check the design. It's like blueprints for software.
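To give a flavour of the blueprint level, here's a toy spec I just made up (the module and its ToyCounter/Limit/TypeOK names are purely illustrative, not from any real system): you describe the initial state and the allowed steps, and the TLC model checker exhaustively explores the reachable states to confirm the stated property holds in every one of them.

    ---- MODULE ToyCounter ----
    EXTENDS Naturals

    CONSTANT Limit        \* bound supplied to the TLC model checker, e.g. 3
    VARIABLE count

    Init  == count = 0

    Incr  == count < Limit /\ count' = count + 1
    Reset == count = Limit /\ count' = 0

    Next  == Incr \/ Reset

    Spec  == Init /\ [][Next]_count

    \* Design property TLC checks in every reachable state
    TypeOK == count \in 0..Limit
    ====

Note that this says nothing about how you'd implement the counter, only what behaviours the design allows.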
Formal verification deals with proving that the actual code you write implements a formal specification. This is a much, much harder problem. Usually, it involves cajoling a theorem prover by adding lots of annotations to your code. There's been some impressive work with the Ironclad project from MSR [1], which brought this down to ~5 lines of annotation per line of implementation code - within striking distance of unit testing! Still, it's probably a decade or more away from widespread use.
Formal verification isn't quite what you meant, though. You were talking about code generation. For general systems specified in TLA+, this isn't going to be useful unless you write the spec in a very idiomatic way, in which case TLA+ is just a very high-level programming language. However, code generation is implemented in some specification languages that deal with very specific areas of system design. I'm thinking primarily of the P language [2], which is used to specify and check state machines and includes code-generation capabilities for C++ and C#.
Code generation is a separate issue. Things you can look into are Fowler's old Perfect Developer toolkit, the SCADE toolkit, model-driven development in the Rockwell-Collins SHADE program, iMatix's DSLs for various things, certified compilers/optimizers for functional languages such as FLINT ML, and the CompCert C compiler for low-level correspondence. Also, high-level metaprogramming systems such as Racket give you the ability to transform things in many ways. The automated programming field will have things for you, too. My scheme was to use a knowledge-based programming approach, leveraging a combination of the above work, to generate certified code from functional specifications. I haven't built it yet, but it would make a practical Ph.D.
Hope you find some of the above enjoyable or helpful on your journey toward end-to-end verified software.
Amazon has apparently used it to verify algorithms in several of the core distributed systems they run. There was an article about it in Communications of the ACM recently; I can't remember the reference offhand.
You two beat me to it, haha. It was very exciting for me to see a company such as Amazon using a tool such as TLA+. I liked their careful, evidence-based approach to exploring and using the tool. Hopefully, a mainstream company using such tools will lead to a lot more interest in using them.
I remember this from a Leslie Lamport talk that was also posted on HN a few months ago. Is anyone using this for specifications, prototyping, or anything in between in real life?
One of my coworkers fleshed out an important and complex component that absolutely had to be logically correct, by using TLA+ modelling.
It was the first time he'd used TLA+ (or TLA) at all, and it took a bit of experimenting to get used to it, but he soon generated a solid logical model and exposed a number of fringe flaws in the original proposed solution.
Going from the TLA+ model to a completed Java application took very little time at all: the logic was already there; it just needed to be fleshed out in the language. It was also easy to split the work out amongst multiple programmers. He's argued that the total time, including learning TLA+, was less than it would have taken to write the application from scratch in Java and discover the bugs as they went along.
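For anyone curious what "exposed a number of fringe flaws" looks like in practice, here's a toy stand-in (made up for illustration, not his actual spec): the Acquire action below is deliberately missing a guard, and the TLC model checker immediately spits out a trace where two workers hold the lock at once, violating the MutualExclusion invariant.

    ---- MODULE ToyLock ----
    EXTENDS Naturals, FiniteSets

    CONSTANT Workers      \* e.g. {"w1", "w2"} in the TLC model
    VARIABLE holders

    Init == holders = {}

    \* Deliberately flawed: forgets to check that the lock is free
    Acquire(w) == holders' = holders \cup {w}

    Release(w) == w \in holders /\ holders' = holders \ {w}

    Next == \E w \in Workers : Acquire(w) \/ Release(w)

    \* Invariant TLC checks; the broken Acquire violates it
    MutualExclusion == Cardinality(holders) <= 1
    ====

Once the model is fixed at that level, the Java version mostly just has to implement the corrected actions; the hard thinking about interleavings has already been done.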
About the only thing he disliked about it was that almost all (if not all) of the tooling around it requires Eclipse, and he hates that IDE with an almost unholy passion :D
> One of my coworkers fleshed out an important and complex component that absolutely had to be logically correct, by using TLA+ modelling.
This has nothing to do with you or your statement, but this is the pre-eminent issue in our industry. Everything absolutely has to be logically correct, or we expose users to flaws in both security and stability that can, in extreme cases, be the difference between life and death and, in regular cases, the difference between being cracked or not.
That's an exaggeration by far: only the tiniest subset of systems can kill people if they fail. Hacks happen by the millions, with the main results being headaches, lost time, and lost money. Even most security-critical systems are the same, with the main targeting being done for espionage (data theft). High-assurance security focuses on what can get people killed (esp. military use), with an emphasis on tools such as these. I've never seen a mainstream FOSS or proprietary product show evidence of an EAL6+ development process, though. Even the security community largely throws stuff together, plus some code review.
Best to keep it real about risks so your solutions match requirements. Most companies are happy to sell patches to broken software or offer software that passed many checklist items for compliance. They have lawyers for the rest.
I agree that ideally we should set our baseline much higher. It's why I promote low-defect methodologies, code reviews, static analysis, and languages (e.g. Ada, Haskell) that prevent or catch most problems early. The few empirical studies done on such things show that this actually saves money, with an occasional productivity boost, for one reason: a huge reduction in debugging time.
And the satisfied customer effect can't be ignored. ;)
At Machine Zone, we've been using formal methods to specify and verify systems we have in development. This is something we've only been doing for the last 6-9 months, though.
By all means, please share it like Amazon did. We need to see and assess every use in order to both understand these tools' usefulness and argue for it. There are way too few anecdotes from industry.
http://muratbuffalo.blogspot.com/2014/08/using-tla-for-teach...
http://muratbuffalo.blogspot.com/2015/01/my-experience-with-...