With all due respect, I must say I think you underestimate how much tests help.
If I implement a network protocol, I will make sure to write automated tests involving clients and servers. Setting this up on localhost or on a virtual network using TUN/TAP is not that hard, and it has helped me find TONS of bugs ahead of time.
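To make that concrete, here's roughly the shape of such a test: start a server on a loopback socket, drive a client against it from the same process, and assert on what comes back. This is only a sketch; the echo protocol is a stand-in for whatever protocol is actually under test, and all the names are illustrative.

```python
import socket
import threading

def echo_server(listener):
    # Accept one connection and echo bytes back until the peer closes.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def test_roundtrip():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    listener.listen(1)
    port = listener.getsockname()[1]

    t = threading.Thread(target=echo_server, args=(listener,), daemon=True)
    t.start()

    client = socket.create_connection(("127.0.0.1", port))
    client.sendall(b"hello")
    assert client.recv(4096) == b"hello"
    client.close()
    t.join(timeout=1)
    listener.close()

if __name__ == "__main__":
    test_roundtrip()
    print("ok")
```

The same structure scales up: replace the echo handler with your real server and the one assert with a battery of protocol-level checks.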
I hear a lot of arguments as to why testing networked or otherwise distributed things isn't necessary. But just look at Aphyr's complete destruction of well-known distributed systems (the Jepsen analyses) by using realistic testing.
And don't claim that the kernel network stack isn't tested; it's just tested by hand. Plus, there are projects like autotest. Without testing, I don't think the kernel devs could keep releasing new versions at all.
Safe languages, formal verification, proven methodologies, and testing are all separate methods of improving code quality, and I don't believe any one of them precludes the others.
I'll end with a famous quote from Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."
>If I implement a network protocol, I will make sure to write automated tests involving clients and servers. Setting this up on localhost or on a virtual network using TUN/TAP is not that hard, and it has helped me find TONS of bugs ahead of time.
I agree that it's helpful, but the problem is coverage. Your test code most likely doesn't cover EVERY single condition that can arise on even a simple TCP/IP connection, especially once you get out of localhost land, where you're dealing not only with your own code but with all the hardware and software between the two systems.
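One way to close part of that gap is deliberate fault injection: put a proxy between the client and server that forwards traffic but occasionally mangles it, then point the test suite at the proxy. The proxy below is my own illustration (the corruption rate and names are made up), just to show the technique.

```python
import random
import socket
import threading

CORRUPT_RATE = 0.001  # fraction of bytes to corrupt; illustrative value

def pump(src, dst):
    # Forward bytes one direction, occasionally flipping all bits in a byte.
    while True:
        data = src.recv(4096)
        if not data:
            break
        data = bytearray(data)
        for i in range(len(data)):
            if random.random() < CORRUPT_RATE:
                data[i] ^= 0xFF
        dst.sendall(bytes(data))
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other side
    except OSError:
        pass

def run_proxy(listen_port, upstream_host, upstream_port):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", listen_port))
    listener.listen(1)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection((upstream_host, upstream_port))
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```

It still won't cover every condition the wild can produce, which is exactly the point being made here, but it does exercise code paths that a clean localhost connection never will.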
The fundamental problem is that even with a rigorous test suite, you're probably going to run into things you didn't even think were possible once the code is out in the wild. For example, we just ran into a scenario where we were seeing corruption over a TCP connection. Given the wire format, it should have been impossible for the packets to appear the way they did (there was some packet-level corruption). After going through multiple Wireshark logs, we found that the culprit was a hardware firewall between the server and the client. Thankfully, our code didn't crash, but it's also something we never tested for, because (in theory, at least) it should never be possible for that specific corruption to be sent in the first place.
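For what it's worth, one defense against exactly this failure mode is an application-level integrity check: middleboxes can rewrite packets and recompute the TCP checksum, so corruption can arrive "valid" at the TCP layer. A per-message CRC turns that into a clean protocol error instead of silent garbage. The frame layout below (4-byte length, 4-byte CRC32, then payload) is invented for the example, not our actual protocol.

```python
import struct
import zlib

HEADER = struct.Struct("!II")  # (payload_length, crc32), network byte order

def encode_frame(payload: bytes) -> bytes:
    return HEADER.pack(len(payload), zlib.crc32(payload)) + payload

def decode_frame(buf: bytes) -> bytes:
    length, crc = HEADER.unpack_from(buf)
    payload = buf[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    if zlib.crc32(payload) != crc:
        raise ValueError("frame failed CRC check")  # corruption detected
    return payload

if __name__ == "__main__":
    frame = encode_frame(b"important bytes")
    assert decode_frame(frame) == b"important bytes"
    # Flip one byte to simulate in-flight corruption; the CRC catches it.
    damaged = frame[:-1] + bytes([frame[-1] ^ 0xFF])
    try:
        decode_frame(damaged)
    except ValueError as e:
        print("caught:", e)
```

It doesn't prevent the corruption, but it makes "impossible" bytes show up as a detectable error rather than as mystery behavior you chase through Wireshark logs.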