
There are plenty of shops that apply rigorous, formal methods to software design. Military contractors providing software for mission-critical systems do this (or used to). Some cut corners, but they are supposed to follow the methods and procedures their contracts specify.

It isn’t common in private industry because it’s typically deemed unimportant.




There’s a spectrum of formal methods, and with the push toward static types in Python coming directly out of industry, I would argue that industry does deem it important to some degree; the question is to what degree. Right now it’s often only worth it for the most critical systems (S3, for example, but not my side-project web app), but if tooling improved, it would be worth it in more cases.
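As a concrete illustration of that gradient, type hints are a lightweight formal method that a checker like mypy can verify before the code ever runs. A minimal sketch (the function and data here are hypothetical):

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    """Look up a user name by numeric id; None when absent."""
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)

# A static checker rejects find_user("1") as a type error before
# runtime; the interpreter alone would only fail at call time.
print(find_user(1))   # alice
print(find_user(99))  # None
```

The annotations cost little, but they rule out a whole class of bugs without anything like full formal verification.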


You aren't paid to write unit tests in private industry, either.

You are paid to write working code.


Depends on where you work. The cult of unit testing is pervasive in the Bay Area, at least. It doesn’t result in noticeably better software.


I once had a fun contract to fix code in one giant application written in PHP. The decent-sized org that wrote said code in the first place had code reviews, unit tests, and what not.

The end result was that they had to hire an outside person to fix what they had produced. I looked through the software and noticed a whole bunch of iffy patterns. I ended up writing a giant Python script that searched for those patterns, fixed them where possible (marking the fixes with appropriate comments), and left a warning comment where it could not.

Not sure if this is a regular occurrence in companies that write and maintain their internal software, as this was the only time I did a job like this.


Does your opinion only apply to unit testing or to automatic testing in general?

I can't imagine the kind of software that wouldn't benefit from automatic testing (unless you would be writing the simplest of CRUD web apps or doing it completely wrong).

From personal experience I see both better code design, from being forced to make the code testable, and fewer bugs, because writing tests reveals problems I wouldn't have found with manual testing before they go into production.


> Does your opinion only apply to unit testing or to automatic testing in general?

In my experience, once people learn about tests, they want to see 1. tests per commit and 2. high code coverage. This leads to unit tests that don't really test much of anything, and what they do test (the layers under you, like the compiler) they test by accident.

The most useful testing would be integration tests on the side, and then regression tests because they're guaranteed to have already found at least one bug.
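A regression test in that spirit pins a previously observed failure in place so it can't silently come back. A minimal sketch (the function and the original bug are hypothetical):

```python
from urllib.parse import urlsplit

def parse_port(url):
    """Return the port of a URL, with scheme-appropriate defaults."""
    parts = urlsplit(url)
    if parts.port is not None:
        return parts.port
    return 443 if parts.scheme == "https" else 80

def test_regression_missing_port():
    # The original bug report: a URL without an explicit port crashed.
    # This test reproduces that input and locks in the fix.
    assert parse_port("http://example.com/path") == 80

def test_https_default():
    assert parse_port("https://example.com") == 443

test_regression_missing_port()
test_https_default()
```

Unlike a coverage-driven unit test, each of these is guaranteed to correspond to at least one real failure that actually happened.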


This, exactly.


I agree, automatic testing would be nice. Unfortunately, I'm the one at work who has to write said automatic testing for a component that makes parallel DNS requests [1] in an event-driven server [2]. I need to cover the cases where:

- A returns, then B

- B returns, then A

- A returns, B returns after our timeout [3]

- B returns, A returns after our timeout

- A returns, B never does

- B returns, A never does

while at the same time making sure the ping tests (the component sends a known query to each server to ensure each server is still available [4]) go through, because if they don't, that particular server will be taken out of rotation. God, I would love to have that automatically tested. What I don't like is having to automate such tests.

[1] To different servers that return different information for a given query. The protocol happens to be DNS.

[2] Based upon poll(), epoll() or kqueue(), depending upon the platform.

[3] We're under some pretty tight time constraints for our product.

[4] DNS, over UDP.
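One way to make orderings like those tractable is to drive them with simulated arrival times instead of real sockets, so every case is deterministic. A minimal sketch of the idea (every name here is hypothetical, nothing from the actual component):

```python
# Hypothetical in-memory stand-in for the reply-collection logic:
# keep replies that arrive before the deadline, drop the rest.
def collect_replies(events, timeout):
    """events: list of (server, arrival_time) pairs; a server that
    never answers simply doesn't appear. Returns servers heard in time."""
    return {server for server, t in events if t <= timeout}

TIMEOUT = 0.050  # 50 ms, standing in for the product's tight deadline

cases = [
    ([("A", 0.010), ("B", 0.020)], {"A", "B"}),  # A then B, both in time
    ([("B", 0.020), ("A", 0.010)], {"A", "B"}),  # B then A, both in time
    ([("A", 0.010), ("B", 0.090)], {"A"}),       # B past the timeout
    ([("B", 0.020), ("A", 0.090)], {"B"}),       # A past the timeout
    ([("A", 0.010)], {"A"}),                     # B never answers
    ([("B", 0.020)], {"B"}),                     # A never answers
]
for events, expected in cases:
    assert collect_replies(events, TIMEOUT) == expected
```

The real work, of course, is faking the poll()/epoll()/kqueue() layer well enough that the production code can be driven this way, which is exactly the tedious part being complained about.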


Do you know of any formal study that talks about this?


It's pretty easy to find various papers/studies about TDD, unit testing, and the like by searching online but I would not call any of them scientifically or empirically rigorous. Given the lack of any objective or agreeable standards regarding what "code quality" means, these papers can draw conclusions to support any view.


You'd be surprised. Often, the tests are more work than the "working code" itself. At companies with a test-heavy focus, I'd say roughly 50 to 60% of the effort is spent on tests (including future work to maintain tests as they are affected by other areas of the code, etc.). You may not be able to get your PR approved without enough tests.


How do you know it's working if it doesn't have unit tests that exercise edge cases? Or even basic functionality? Also, good unit testing can allow you to reduce the time it takes to write software by giving you an isolated environment to run your code.

Unit tests are a tool to get you to working code.
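In that spirit, a unit test that exercises both the basic behavior and the edge cases might look like this (clamp() and its tests are hypothetical, invented for illustration):

```python
import unittest

def clamp(value, lo, hi):
    """Constrain value to the closed interval [lo, hi]."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(value, hi))

class ClampTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_edges(self):
        self.assertEqual(clamp(-1, 0, 10), 0)    # below the range
        self.assertEqual(clamp(11, 0, 10), 10)   # above the range
        self.assertEqual(clamp(0, 0, 0), 0)      # degenerate range

    def test_bad_range(self):
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

if __name__ == "__main__":
    unittest.main()
```

The edge-case tests are the ones doing the real work: the basic case would probably be caught by a quick manual run anyway.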


With interpreted languages it is trivial: you just try parts of your code. For Python there is, for example, IPython, which makes the experience really great.

Also, I don't agree with GP: there are places that require unit tests, especially if multiple people are working on a project. You could be a developer who has never written a bug in your life, but others might not be as great :) Besides, many bugs happen when other parts of the code change. Your assumptions might have been 100% correct when you wrote the code, but someone can change a different part of the code, and now your function is called with values that were previously impossible.

The great thing about statically typed languages is that they can detect a large part of these bugs; with dynamically typed languages you're out of luck.

An example of this was the Python 2 -> Python 3 migration, where having a large test suite helped tremendously.
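A str/bytes mix-up of the kind that migration surfaced is exactly what annotations catch early. A hypothetical example:

```python
def checksum(payload: bytes) -> int:
    """Sum of byte values modulo 256; under Python 2 a str would
    have slipped through here unnoticed."""
    return sum(payload) % 256

# A static checker flags checksum("hello") as a type error before the
# code runs; at runtime under Python 3 it would raise TypeError inside
# sum() instead, and only if that path is actually exercised.
print(checksum(b"hello"))
```

Without either type hints or a test hitting this path, the bug only shows up in production on whichever input first takes the wrong branch.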


No. Failed tests can only tell you that you've found a problem; passed tests can't tell you there aren't any problems present. Provably correct code can be proven correct.


Hmm. What if your unit tests are statements about your code? I view them as a way of locking in functionality: saying that the code does or doesn't do certain things for certain inputs. That's what unit tests are in my mind. I find it strange to view tests as only a way of catching bugs. You use them to say what your code is and isn't capable of doing.

Tests are not a great way of saying code has issues x, y, and z, but they are a great way of saying something does a, b, and c. If you are going through a refactor, you might no longer be implementing a, b, or c correctly, and if your tests are designed well this will be caught.

If we are only talking about bug catching, you are right that there are better tools. But I'll say that if somebody claims code does something and there are no tests demonstrating that fact, I generally don't trust it, unless it's something like a game where certain things are hard to test.


> Provably correct code can be proven correct.

There are two different kinds of statements a formal proof can establish:

1. there is at least one error in the code, or

2. there are no errors at all.

Proving (1) is much easier than proving (2).



