
I think I have to disagree with Naur on this, in that people using the Scientific Method don't ship their theories, but we do.

As a scientist who has just succeeded in testing a hypothesis, I now need to go back and document a simplified series of steps that should lead any independent party to the same phenomenon. Once we are on the same page, they can confirm or refute my theory based on their own perspectives on the problem space.

During that process I may discover that I based half of my experiment on another hypothesis that I never tested, or that was plain wrong. Now I've discovered my 'load-bearing' assumptions. I may discover something even more interesting there, or I may slink away, having never told anybody about my mistake.

Essentially, scientists still 'build one to throw away'. We haven't in ages. And my read on Brooks's insistence that we build one to throw away is that it was aspirational, not descriptive. And notably, he apparently recants in the 20th anniversary edition (which is itself 25 years old now):

> "This I now perceived to be wrong, not because it is too radical, but because it is too simplistic. The biggest mistake in the 'Build one to throw away' concept is that it implicitly assumes the classical sequential or waterfall model of software construction."

So we are very much at odds with the scientific method. And we have the benefit of hindsight. We have seen the horrors that can occur when you take the word Theory out of its scientific context and try to apply it to non-scientific theories. We should learn from the mistakes of others and summarily reject any plan where we do the same.

In other words: next metaphor, please, and with all due haste.




I think I have to disagree with you on this one: I've used the scientific method (though not in an explicit, checkbox-y way) plenty of times to ship and debug code.

In particular, as I've said on this forum many times, I work primarily on the maintenance end of software. I don't know what the creators or previous developers were thinking, especially with more recent projects (documentation quality has really gone downhill; people call autogenerated UML diagrams "design docs", but without commentary they only reflect the state of the system, not its design). I have to try different changes based on my understanding of the system and see the consequences. That is, I form a hypothesis about what will happen if I do X, I do it, I collect the results, and I've either confirmed my hypothesis, refuted it, or left it indeterminate. I form another and repeat.

Over time I build up a model (theory) of how the system behaves and how it should be updated or extended. Since I can't keep tens of thousands of lines of code in my head, let alone hundreds of thousands or millions, I only ever have a model (theory), because I never have the totality of it in my mind. Good code, with good use of modules, makes it easier to keep large chunks in mind, but I still have to have a model of how those modules work and work together.
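To make that loop concrete, here's a minimal sketch in Python; every name in it (Hypothesis, experiment, do_x) is a hypothetical stand-in invented for illustration, not any real tool or API:

    # A sketch of the maintenance loop above; all names are hypothetical.
    from dataclasses import dataclass
    from typing import Any, Callable, Optional

    @dataclass
    class Hypothesis:
        statement: str      # "if I do X, the system will do Y"
        predicted: Any      # the Y I expect to observe

    def experiment(h: Hypothesis, do_x: Callable[[], Optional[Any]]) -> str:
        observed = do_x()                  # make the change, run the system
        if observed is None:
            return "indeterminate"         # couldn't observe; redesign the test
        if observed == h.predicted:
            return "confirmed"             # the model survives another round
        return "refuted"                   # time to revise the model

    # Example: I believe empty input yields empty output.
    h = Hypothesis("empty input -> empty output", predicted=[])
    print(experiment(h, lambda: sorted([])))   # -> confirmed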

Hell, this is half (or more) of testing for older software systems. You put in some input and see if you get the output you expected. If you don't, you evaluate why (is my model wrong or is the system wrong) and repeat.
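That's essentially a characterization test: you pin down what the system actually does, not what a spec says it should do. A minimal sketch, with legacy_format as a hypothetical stand-in for the old code under test:

    # Characterization test: the expected value was recorded from a
    # previous run of the old system, not derived from a spec.
    def legacy_format(name: str) -> str:       # hypothetical stand-in
        return name.strip().title()            # imagine decades-old logic

    assert legacy_format("  ada lovelace ") == "Ada Lovelace"
    # If this assertion ever fails, the question is exactly the one
    # above: is my model wrong, or is the system wrong?
    print("behavior matches the recorded expectation")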


I don't mean 'use' as in a #7 Torx wrench. I mean 'use' as in air.

I have shipped bug fixes using organized hypothesis checking as well, especially sanity checks (making sure the instruments are working). But it is not the software developer's default behavior, and I'm sure you've lamented that just as I have. You and I are tourists, and many around us aren't even that. So when we speak of whether 'we' apply formal rigor to our work, is it still rigor when there is no discipline? I don't think rigor is something you do on a random Thursday. It's something you do all the time.

So no, 'we' do not use the scientific method. We dabble.

And so when someone like Naur tries to summarize software with a line about theory building, he's not speaking about everybody. If he were honest, he might not even be speaking accurately about himself.

ETA: But he's talking about the long arc, not a single bug fix: that we are circling in on what the actual problem is and feeling it out with code. But since we stop at "if it ain't broke, don't fix it", we never actually crystallize the thing we built. We never test the hypothesis we suppose we have created. We have spot-checked this organic thing that never gets pinned down and might actually be DOA. We hope the evidence that we are wrong is just 'glitches' or problems with the user's machine, until someone comes to us with a counter-proof that shows unequivocally that we were wrong.

Which leads to problems like those mentioned in this comment tree.


As far as "what happens when I do this?" goes, it neglects the null hypothesis. If you don't pay attention to falsifiability, can you claim to be doing science?


I use the scientific process constantly while shipping code (most of my time is spent writing fixes for large production systems that are being actively used, where a regression could cost millions of dollars). In particular, I explicitly state my hypotheses, and use positive and negative control experiments when evaluating my fix.
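For anyone unfamiliar with the control terminology, here's a minimal sketch of that kind of evaluation; trigger_bug and healthy_request are hypothetical stand-ins for real reproduction and regression probes, not any actual API:

    # Positive/negative controls around a candidate fix.
    # Everything here is a hypothetical stand-in, not a real system.
    def trigger_bug(patched: bool) -> str:
        return "ok" if patched else "bug reproduced"   # the buggy code path

    def healthy_request(patched: bool) -> str:
        return "ok"                                    # an unrelated code path

    # Positive control: the unpatched run must reproduce the bug,
    # proving the experiment can actually detect the defect.
    assert trigger_bug(patched=False) == "bug reproduced"
    # The experiment itself: with the patch, the bug must be gone.
    assert trigger_bug(patched=True) == "ok"
    # Negative control: already-correct behavior must be unchanged.
    assert healthy_request(patched=True) == healthy_request(patched=False)
    print("fix passes the experiment and both controls")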

I often "build one to throw away", but half the time what I build is good enough that it goes into production and lasts for a while.



