The article fails to demonstrate how code-tests result in objectively better code. Many comp sci programs have courses on testing that cover TDD, unit testing and fuzzing, among other topics.

Yet much of the safety-critical code we rely on for critical infrastructure (nuclear reactors, aircraft, drones, etc.) is not tested in situ. It is tested via simulation, but there is minimal testing in the actual operating environment, which can be quite complex. Instead, the code follows carefully chosen design patterns, data structures, and algorithms to ensure that it is hazard-free, fault-tolerant, and capable of graceful degradation.
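
To make the graceful-degradation point concrete, here is a toy sketch, assuming a hypothetical read_sensor() and made-up thresholds (not from any real control system): the read path tolerates a lost sample by serving the last known-good value, and falls back to a conservative default once readings go stale.

    # Minimal sketch of graceful degradation; read_sensor(), the thresholds,
    # and current_temperature() are hypothetical stand-ins.
    import random
    import time

    SAFE_DEFAULT_C = 20.0      # conservative value used before any good reading
    MAX_STALE_SECONDS = 5.0    # beyond this, degrade further to the safe default

    def read_sensor() -> float:
        """Hypothetical sensor read; raises TimeoutError on a lost sample."""
        if random.random() < 0.2:
            raise TimeoutError("sensor did not respond")
        return 20.0 + random.uniform(-0.5, 0.5)

    def current_temperature(state: dict) -> float:
        """Return the freshest usable temperature, degrading gracefully."""
        try:
            state["last_value"] = read_sensor()
            state["last_ok"] = time.monotonic()
        except TimeoutError:
            pass  # tolerate the fault; decide below what to report
        if "last_ok" in state and time.monotonic() - state["last_ok"] <= MAX_STALE_SECONDS:
            return state["last_value"]  # recent reading is still trustworthy
        return SAFE_DEFAULT_C           # degrade to the conservative default

The specifics don't matter; the point is that the failure mode is designed for up front rather than discovered by a test afterwards.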

So testing has its place, but it is really no better than simulation. And in simulation, the outputs are only as good as the inputs. It cannot guarantee code safety, and it is not a substitute for good software design (read: data structures and algorithms).

Having said that, fuzzing is a great way to find bugs in your code, and it is highly recommended for any software that exposes an API to other systems.
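
As one way to do that, here is a minimal property-based sketch using Python's Hypothesis library (my choice of tool, not something from the article); parse_message() is a hypothetical stand-in for whatever the API exposes, and the only property checked is that arbitrary bytes may be rejected but must never crash the parser.

    # Fuzz-style test at an API boundary; parse_message() is hypothetical.
    from hypothesis import given, strategies as st

    def parse_message(data: bytes) -> dict:
        """Hypothetical API entry point: parses b'key=value;key=value' pairs."""
        try:
            text = data.decode("utf-8")
        except UnicodeDecodeError as exc:
            raise ValueError("not valid UTF-8") from exc
        result = {}
        for pair in filter(None, text.split(";")):
            key, sep, value = pair.partition("=")
            if not sep or not key:
                raise ValueError(f"malformed pair: {pair!r}")
            result[key] = value
        return result

    @given(st.binary(max_size=1024))
    def test_parser_never_crashes(data):
        # The parser may reject input with ValueError, but no other
        # exception should escape the API boundary.
        try:
            parse_message(data)
        except ValueError:
            pass

Run it under pytest and Hypothesis generates on the order of a hundred byte strings per run, shrinking any failing input to a minimal reproducer.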

>fails to demonstrate how code-tests result in objectively better code.

Tests give you the freedom to refactor, which results in better code.

>So, testing has its place, but testing is really no better than simulation

Testing IS simulation and simulation IS testing.

>And in simulation, the outputs are only as good as the inputs. It cannot guarantee code safety

Only juniors think that you can get guarantees of code safety. Seniors look for ways to de-risk code, knowing that you're always trending towards a minimum.

One of the key skills in testing is defining good, realistic inputs.
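
For example, here is a minimal sketch of what I mean by realistic inputs, assuming a hypothetical normalize_email() helper; the cases are drawn from what production traffic actually contains rather than a single happy-path string.

    # Realistic-input test cases for a hypothetical helper.
    import pytest

    def normalize_email(raw: str) -> str:
        """Hypothetical helper: trim whitespace and lowercase the domain."""
        local, _, domain = raw.strip().partition("@")
        return f"{local}@{domain.lower()}"

    @pytest.mark.parametrize(
        ("raw", "expected"),
        [
            ("User@Example.COM", "User@example.com"),          # mixed case
            ("  bob@example.com \n", "bob@example.com"),        # copy-pasted whitespace
            ("alice+inv@example.com", "alice+inv@example.com"), # plus-addressing
            ("olá@exämple.de", "olá@exämple.de"),               # non-ASCII survives
        ],
    )
    def test_normalize_email_on_realistic_inputs(raw, expected):
        assert normalize_email(raw) == expected

A happy-path string passes trivially; it's the whitespace, plus-addressing, and non-ASCII cases that users actually send.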


I don't understand what the difference between a simulation and a test is.

Mostly just semantics.

There is none, and that's my point. Simulations themselves are contrived scenarios that are not representative of production environments.
