Minimizing the cost of producing secure, reliable, efficient systems that fulfill business objectives and are inexpensive to modify (where modifications preserve reliability and performance). The cost must also include the risk and expense of finding engineers to maintain and change the system.
Enterprise software perspective - IT managed directly by business users. That is, a system that can accept natural-language descriptions of business logic, resolve ambiguity, and modify, maintain, and heal itself. In other words, a virtual software architect.
While I agree those are worthy goals, the middle two are CS, not SENG. Real AI is probably the loftiest goal in CS, but it requires a lot more theory work before anyone will be able to implement it.
There are a lot of things wrong with this analogy, but I think it still encapsulates a nice ideal. We'd like to get it right the first time, and make it last with minimal maintenance.
(This is for software engineering, not computer science)
Many bridge-building projects go over time and over budget.
Most bridges have a feature list fixed years before construction begins.
Bridges are static. Even draw-bridges do not adapt to their environment automatically, but require human intervention.
There is a category error in applying the construction practices of static things to the construction practices of dynamic things, hence the failure of waterfall. Note that the original papers that introduced waterfall actually suggest a system that more closely resembles iterative development than BDUF.
If we found a way to make BDUF work reliably, it would transform the industry overnight, and turn software development into a recognisable engineering discipline. The fact that we haven't yet, and that one particular approach failed, doesn't mean that it isn't possible. Nor does it mean we should stop trying.
Iterative development is an interesting stop-gap, but it's certainly not the be-all and end-all we should be looking for.
> There are a lot of things wrong with this analogy
I think so. As a rule, when something can be developed incrementally rather than planned and engineered, it should be. This makes mistakes less expensive, allowing you to arrive at a satisfactory end result in a fraction of the time.
This is one of the huge advantages of the web over desktop software. Initial delivery to the desktop and subsequent updates cost significant time and headache. Redeploying to the server is just a matter of running a script.
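To make "running a script" concrete, here is a minimal sketch of what such a redeploy might look like. This is not anyone's actual pipeline: the host, paths, and service name are hypothetical, and it assumes SSH access with rsync available on both ends.

    #!/usr/bin/env python3
    # Minimal redeploy sketch. HOST, SRC, DEST, and the "myapp"
    # service name are hypothetical placeholders.
    import subprocess

    HOST = "deploy@example.com"   # hypothetical server
    SRC = "./build/"              # local build output
    DEST = "/var/www/myapp/"      # hypothetical deploy directory

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # abort on the first failure

    # Sync only what changed; --delete drops files removed from the build.
    run("rsync", "-az", "--delete", SRC, f"{HOST}:{DEST}")
    # Restart so the server picks up the new code (service name assumed).
    run("ssh", HOST, "sudo systemctl restart myapp")

Compare that one command to packaging, distributing, and installing a desktop update on every user's machine.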
"This is one of the huge advantages of the web over desktop software. Initial delivery to the desktop and subsequent updates cost significant time and headache. Redeploying to a server is just a matter of running a script."
This raises a good point about reuse. You wouldn't try to reuse the same bridge across a different span without modifications, but we require our software to run in different contexts (be it different web browsers, operating systems, versions of the language, et cetera).
I think the bridge analogy can work in a different, equally effective way: we can build software based on well-established and tested engineering principles, with tools and materials that have a known, verifiable set of behaviors.
I'm not a materials/mechanical engineer, but it seems that engineers can build bridges properly the first time (most of the time, right?) because they are working against a set of well-understood, well-tested physical laws (for lack of a better term). Even if I write the perfect, say, shopping cart application, that bit of software still relies on programming frameworks, web servers, operating systems, and hardware systems that have known issues.
If bridges were like software, as soon as a drunk driver ran through a bridge's guard rail, the designers would have to follow up with a post that started, "We had some downtime from an accident, here is what we're going to do about it..."
The Holy Grail of software engineering — the one true approach to building software systems that can be applied, universally, to any and all software projects.
You know that scene in Independence Day when Jeff Goldblum flies into the mothership and uploads a program to shut it down? That.
No, I don't mean an alien empire running on Mac OS 7.6.5, I mean extreme interoperability so software can rewrite itself to work on a different platform (self-porting, in other words).
Vinge's "superhuman intelligence" rhetoric bothers me because it suggests that intelligence is a one-dimensional quantity. I don't think this makes sense except in the context of specific domains. For example, a basic calculator has superhuman intelligence when it comes to arithmetic, because it can evaluate arbitrary expressions much faster than a human can and doesn't make mistakes. But it's dumb about everything else.
Update: The child comment by binaryfinery points to where Vinge addresses this. Sorry, I failed to add the disclaimer that I only skimmed the article.