Amazon describes many kinds of problems leading to reliability and security failures that TLA+ helped knock out. Their engineers are sold on it now and have no intention of dropping it. Some sentences in that article are nearly identical to ones I read in ancient papers using ancient methods. The benefits of precise, easy-to-analyze specifications are apparently timeless.
http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-s...
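To make that concrete without claiming anything about Amazon's actual specs, here's a toy Python analogue of what TLA+/TLC-style checking buys you: exhaustively explore every interleaving of a small design and test an invariant in each reachable state. The two-process "check balance, debit, credit" transfer and the no-overdraft invariant below are made up for illustration:

    # Toy analogue of spec-level model checking: explore every interleaving
    # of two non-atomic transfers and check an invariant in every state.
    from collections import deque

    AMT = 4  # each transfer tries to move 4 out of an account holding 5

    def _set(steps, i, v):
        return tuple(v if j == i else s for j, s in enumerate(steps))

    def next_states(state):
        a, b, steps = state
        for i in (0, 1):                      # two concurrent transfers
            s = steps[i]
            if s == 0 and a >= AMT:           # balance check passes
                yield a, b, _set(steps, i, 1)
            elif s == 1:                      # debit, not atomic with check
                yield a - AMT, b, _set(steps, i, 2)
            elif s == 2:                      # credit the destination
                yield a, b + AMT, _set(steps, i, 3)

    def no_overdraft(state):
        return state[0] >= 0

    def check(init):
        seen, queue = {init}, deque([init])
        while queue:
            state = queue.popleft()
            if not no_overdraft(state):
                return "invariant violated in state %r" % (state,)
            for nxt in next_states(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return "invariant holds in all %d reachable states" % len(seen)

    print(check((5, 0, (0, 0))))  # finds the double-debit interleaving

A plain test suite rarely hits that exact interleaving; an exhaustive checker finds it immediately, which is the kind of payoff the article describes.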
Here's the same approach done in Z via Altran's Correctness by Construction methodology:
http://www.adacore.com/uploads/downloads/Tokeneer_Report.pdf
They apply just about every lesson learned in high-assurance development. Their defect rate, from this Tokeneer demo back to the old Mondex CA work, is usually around 0.04 defects per 1,000 lines of code. That's better than the Linux kernel.
Rockwell Collins formalized a separation architecture (hardware, microcode, etc.), then integrated it into a CPU:
http://www.ccs.neu.edu/home/pete/acl206/papers/hardin.pdf
NICTA, who did the seL4 verification, use tools to model the parts of the language (C, in their case) that cause security errors, then use provers to verify those constructs are used correctly. Example tool:
https://ssrg.nicta.com.au/projects/TS/autocorres/
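AutoCorres itself works on C inside Isabelle/HOL, but the general shape of the idea can be sketched in a few lines of Python: give each error-prone construct an explicit side condition, record that condition at every use site, and have something (a prover there, a trivial evaluator here) discharge each obligation. The raw buffer read and the deliberately buggy parse_header below are hypothetical:

    # Sketch of "model the risky construct, then verify each use" (not
    # AutoCorres itself): a raw read carries the side condition that makes
    # it safe, and every use site produces an obligation to check.
    from dataclasses import dataclass

    @dataclass
    class Obligation:
        site: str        # where the risky construct was used
        holds: bool      # its side condition, instantiated at that site

    obligations = []

    def raw_read(buf, i, site):
        # Model of an unchecked C-style read: record the bounds condition
        # instead of silently assuming it.
        obligations.append(Obligation(site, 0 <= i < len(buf)))
        return buf[i] if 0 <= i < len(buf) else None

    def parse_header(packet):
        length = raw_read(packet, 0, "parse_header:length")
        # Deliberate bug: trusts a length field taken from the packet itself.
        return raw_read(packet, length, "parse_header:payload")

    parse_header([9, 1, 2, 3])   # the length byte claims 9; only 4 entries

    for ob in obligations:
        print(("OK   " if ob.holds else "FAIL ") + ob.site)

In the real toolchain those obligations are, roughly, theorems about the C semantics that the prover has to discharge; the point is that unsafe constructs never get used without an explicit, checkable justification.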
Lots of groups are using lots of different tools with great results; the difficulty and the impact on time-to-market vary. The winning combo seems to be compositional function or state-machine models, subsets of safe languages, design-by-contract (or at least interface checks), static analysis, design/code review, testing, and compilation with conservative optimization. There are free tools for all of that. It takes about the same time to code as usual while preventing a lot of debugging during testing and integration.
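The design-by-contract piece in particular costs almost nothing to adopt. A minimal, assert-style sketch in Python (the toy allocator and its conditions are made up for illustration):

    # Minimal design-by-contract sketch: preconditions and postconditions
    # as explicit, executable checks at a module interface, so violations
    # fail loudly during testing instead of surfacing at integration time.

    def require(cond, msg):
        if not cond:
            raise AssertionError("precondition failed: " + msg)

    def ensure(cond, msg):
        if not cond:
            raise AssertionError("postcondition failed: " + msg)

    def allocate(pool, size):
        require(size > 0, "size must be positive")
        require(size <= pool["free"], "not enough free space")
        pool["free"] -= size
        ensure(pool["free"] >= 0, "free space never goes negative")
        return size

    pool = {"free": 10}
    allocate(pool, 4)    # fine
    allocate(pool, 20)   # fails fast at the interface with a clear reason

Languages and tools like Eiffel, SPARK, and Frama-C/ACSL build the same idea in natively, and the latter two can prove the contracts statically rather than just checking them at runtime.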