> I haven't read Bossavit's critique (should I?) but anyone taking that position had better first make sure he's not standing on a swamp too.

Author of "Leprechauns" here.

I'm not sure I follow you in the above sentence. What happened time and again in my investigations was that I would find a citation (in McConnell or in Boehm or elsewhere) that was metaphorically accompanied by the statement "this here is a solid piece of land in the swamp".

When I got there, however, what I found was, in fact, just more swamp.

It takes a higher standard of proof to demonstrate that something is solid ground than to demonstrate there's a flaw in it. That may be unfair, but it's why scientists need a lot of training.

To take just one example, Graham mentions the Hughes Aircraft study, cited (by McConnell and others) in support of the usual "exponential" curve for defect cost (as "Willis, Ron R., et al, 1998").

When you actually read that paper - which is both one of the easiest to obtain and one of those with the most detailed data about what was studied and how - and look for the raw data, you find numbers that obey neither an exponential law nor even a monotonic increase. In one column, for instance, the numbers vary within a narrow range, 0.36 to 2.00, with two maxima at the "Coding" and "Functional test" phases, dropping off before, between, and after. In another column the costs vary only by a factor of two between Unit Test and System Test, and by a factor of less than four between Code and System Test. The exponential rise generally does not hold for the pre-1991 period; some post-1991 measurements come closer, but many do not.
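To make that check concrete, here is a minimal sketch in Python. The phase names and cost factors are illustrative assumptions chosen to mimic the quoted 0.36-2.00 range and double-maximum shape - they are NOT the published Willis et al. numbers - and the test simply asks whether such a series could plausibly be exponential:

  # Hypothetical per-phase defect-fix cost factors -- illustrative
  # only, NOT the data from Willis et al.; they merely mimic the
  # 0.36-2.00 range and two-maximum shape described above.
  phases = ["Requirements", "Design", "Coding", "Unit test",
            "Functional test", "System test"]
  cost = [0.50, 0.40, 2.00, 0.80, 1.90, 1.10]

  # An exponential rise c * r**i (with r > 1) implies, at minimum,
  # a monotonic increase from phase to phase...
  monotonic = all(a < b for a, b in zip(cost, cost[1:]))

  # ...and roughly constant phase-to-phase ratios (log-linearity).
  ratios = [round(b / a, 2) for a, b in zip(cost, cost[1:])]

  print("monotonic increase:", monotonic)   # False for this shape
  print("successive ratios:", ratios)       # swing above/below 1.0

A genuinely exponential series passes both checks: it increases at every step and its successive ratios stay roughly constant. A series with two interior maxima, like the one described above, fails the first check immediately.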

It's hard for me to see this evidence as doing anything but undermining the original claim of an exponential rise as a generally reliable regularity in software development.

Given this, I don't think that citing this paper in support of the claim is appropriate.




"I'm not sure I follow you in the above sentence."

I'm saying that if one is going to attack someone personally for dishonesty, as opposed to critiquing the state of the field in general, the bar for that is high and one had better be more than careful with one's own particulars. Reasonable people can interpret this stuff differently. The post linked to upthread [1] didn't strike me as dishonest (though of course it's only one side).

I'm glad you've been looking for solid ground in the swamp and finding it swampy. I think that's valuable. What bothers me are the hints in the OP, comments here, and elsewhere I've seen (including people talking about your work) that this is about one guy making shit up when in reality the problem is endemic to the entire field. Unless he's been egregiously dishonest, which I doubt, it's a distraction.

On another note, you seem like a good person to ask: is there any finding in the software engineering literature that you think holds up? i.e. have you found any solid ground in the swamp? I'm not sure I have (but I haven't looked nearly as hard as you). If there really isn't anything, that alone is kind of shocking.

[1] http://forums.construx.com/blogs/stevemcc/archive/2011/01/09...


> attack someone personally for dishonesty

Hackers hate ad hominem, and with good reason. I too subscribe to the school of "harsh on the problem, soft on the person". On the other hand, it makes no sense not to call out things that keep us in the swamp, or to tiptoe around important epistemic issues just to spare hurt feelings.

One problem we have is that few people are willing to go to great lengths to check out the available evidence; instead the pattern is to repeat claims (and associated citations) made by people who sound authoritative, accepting them essentially on faith. This has the unfortunate side-effect of magnifying the mistakes of people who have become authorities.

In "Leprechauns" and elsewhere, my focus isn't on what any particular person says, but on specific claims. "The cost of fixing defects rises exponentially as a function of SDLC phase" isn't tied to a particular person - it originated with Boehm but many others have propagated it. My method has been to look up the evidence and to see if it held up. Also to think hard about why studies may have failed to show conclusively what they set out to prove, and how to overcome these challenges.

I'm doing the same kind of thing in my own area, i.e. I'm collecting all available evidence, pro or con, about whether various Agile practices "work".

> is there any finding in the software engineering literature that you think holds up

There are many good ideas and rules of thumb, but when it comes to very general "laws", solidly established - that's harder. I've been asking that question over and over again, hoping to get a convincing answer. Still haven't got one.

I've read a bunch of supposedly solid references, e.g. Pressman, or the "Handbook of Software and Systems Engineering", and have been underwhelmed. (The first "law" proposed in the Handbook: "Requirement deficiencies are the prime source of project failures," based on evidence like the Chaos Reports. My rebuttal: http://lesswrong.com/lw/amt/causal_diagrams_and_software_eng... )


The thing about "many good ideas and rules of thumb" is, I've got a few dozen of those of my own! Most of us do. It would be interesting if there were decisive evidence against any of them, but even when I read studies whose conclusions contradict my beliefs, the studies are so flimsy that I find it easy to keep my beliefs.

There does seem to be a recent wave of software engineering literature, exemplified by http://www.amazon.com/Making-Software-Really-Works-Believe/d... (which I haven't read). Are you familiar with this more recent stuff? Does it represent new research or merely new reporting on old research? If the former, are the standards higher?


I haven't read all of Making Software yet myself, but you would be interested in one of the first few chapters. I don't remember who wrote it offhand, but as I recall, in discussing the standards of evidence needed for software engineering, the author concluded - and I am paraphrasing here - that hard numbers were difficult to get and came with many, many conditions; as a result, anecdotes were likely the best you could do and were perfectly acceptable. (Was that enough disclaimers?)

You might be able to tell why I lost my enthusiasm for the book.


Thanks. My enthusiasm mostly consists of trying to get other people to read this stuff and tell me what it says :)

I think that's the argument for junking the SE literature. If it can't do any better than anecdote, well, to quote Monty Python: we've already got some, and they're very nice.



