It's not like we can't design very elegant, robust, reliable software, you know.
We just can't find anybody to pay for us to retool the whole stack (and I do mean the whole stack, since we're only as strong as our weakest link) while the current ad hoc solution operates within acceptable parameters.
The guy who wrote this paper, in my opinion, is missing two bedrock principles of "pure" engineering: manufacturing tolerances and cost. If it works as well as it needs to and comes in under budget, it's Miller time. There's a reason every stereo ever made goes -clunk- when you push the 'on' button. There's a reason none of the walls in your house are exactly plumb.
The other real serious mischaracterization here is likening software to a physical product, even one as complicated as a skyscraper.
Software is a factory that makes products, whether they're HTML pages, or graphics on a screen, or inputs to an industrial controller. Once you start looking at how factories are engineered, designed, and managed, a lot of the chaos of computing looks very familiar. E.g. "367 days since someone lost a limb in a major industrial accident."
As in a factory, quality in software is the result of a myriad of factors, but it reduces to some simple concepts: an understanding of psychology, a deep understanding of statistics, knowledge and application of systems theory, and the basic ideas of epistemology and the scientific method applied to management and production. Finally, it takes leadership and universal application of these concepts throughout an organization, using a PDCA cycle of continuous improvement (Agile is a rough approximation of one).
W. Edwards Deming had this all exactly correct back in the 1940s, after WWII, when he taught these concepts to Japanese companies, transforming them from makers of cheap crap into the quality powerhouse economy we know and love.
We should listen to him again. That's the "new software development" we need.
I've been increasingly wondering if the cost of building reliable, effective, and secure software in all the places we use it... is just more than we can afford on a social level (like a higher percentage of GDP). With that price mostly being people, of course.
When we say "We don't know how to build good software", does that really just mean "We don't know how to build good software cheaply enough for the businesses that use it to still remain sufficiently profitable"?
I may not have said this quite right; I keep coming back to it and trying to think it through more. It's not a popular thing to say on HN.
Given the costs involved in making truly high-reliability software (think planes, pacemakers, Mars rovers), profits don't factor into it at all-- we're looking at 10x the cost at a minimum. Nobody pays for that level of reliability without an excellent reason.
I'm not talking about making all software as reliable as is needed for planes, pacemakers, etc.
I'm talking about making all software as reliable and secure as appropriate for its context. I think there's a general opinion among many (especially software engineers) that most currently deployed software is not as high quality as it should be, hence the concern: "What are we doing wrong? Do we not know how to make quality software?"
Of course, the context and the consensus expectations for reliability/security can change, which is part of what's happened as software has become more integral to society and the economy.
Well, I think I mostly agree. The only distinction I would draw is that "good enough for its context" is synonymous in my mind with "as good as I'm willing to pay for." Everybody wants to pay for hamburger and eat caviar, but that ain't how the world works.
For all the increased whinging about software reliability, I have not seen a corresponding increase in what people are willing to pay for software. This indicates to me that we've achieved a level of rough market equilibrium.
But I would be thrilled to be wrong about that. I have a whole laundry list of refactors I'd love to take on. Cleaning up after myself is a luxury I can ill afford.