Ask HN: What is the Holy Grail for Software Engineering?
15 points by __Rahul on Dec 4, 2010 | 39 comments
Physics has its Theory of Everything - what's out there for Software Engineering?



Minimizing the cost of producing secure, reliable, efficient systems that fulfill business objectives and are inexpensive to modify (where modifications preserve reliability and performance). That cost must also include the risk of finding engineers to provide maintenance and changes.


From an enterprise software perspective: IT managed directly by business users. That is, a system that can accept natural-language descriptions of business logic, resolve ambiguity, and self-modify, maintain, and heal itself. In other words, a virtual software architect.


Such a system would necessarily be self-conscious, with all the severe problems that implies, and human reproduction already produces such systems.


Holy grail, eh? Well, if you really want to swing for the fences...

* Programming a computer in plain English.

* Teaching a machine to compose (real) music, or draw (real) art.

* Simulating a brain -- or, even better, emulating one.

* A programming language that makes it possible to write code that's worthy of being read as literature.


While I agree those are worthy goals, the middle two are CS, not SENG. Real AI is probably the loftiest goal in CS, but it requires a lot more theory work before anyone will be able to implement it.


Building software like we build bridges.

There are a lot of things wrong with this analogy, but I think it still encapsulates a nice ideal. We'd like to get it right the first time, and make it last with minimal maintenance.

(This is for software engineering, not computer science)


Physical construction envy is naive.

Many bridge-building projects go over time and over budget.

Most bridges have a feature list fixed years before construction begins.

Bridges are static. Even draw-bridges do not adapt to their environment automatically, but require human intervention.

There is a category error in applying the construction practices of static things to the construction of dynamic things, hence the failure of waterfall. Note that the original papers that introduced waterfall actually suggest a process that more closely resembles iterative development than BDUF.


If we found a way to make BDUF work reliably, it would transform the industry overnight, and turn software development into a recognisable engineering discipline. The fact that we haven't yet, and that one particular approach failed, doesn't mean that it isn't possible. Nor does it mean we should stop trying.

Iterative development is an interesting stop-gap, but it's certainly not the be-all and end-all we should be looking for.


> There are a lot of things wrong with this analogy

I think so. As a rule, when something can be developed incrementally rather than planned and engineered up front, it should be. This makes mistakes less expensive, allowing you to arrive at a satisfactory end result in a fraction of the time.

This is one of the huge advantages of the web over desktop software. Initial delivery to the desktop and subsequent updates cost significant time and headache. Redeploying to the server is just a matter of running a script.


"This is one of the huge advantages of the web over desktop software. Initial delivery to the desktop and subsequent updates cost significant time and headache. Redeploying to a server is just a matter of running a script."

This raises a good point about reuse. You wouldn't try to reuse the same bridge across a different span without modifications, but we require our software to run in different contexts (be it different web browsers, operating systems, versions of the language, et cetera).



I think the bridge analogy can work in a different, equally effective way: we can build software based on well-established and tested engineering principles, with tools and materials that have a known, verifiable set of behaviors.

I'm not a materials/mechanical engineer, but it seems that engineers can build bridges properly the first time (most of the time, right?) because they are working against a set of well-understood, well-tested physical laws (for lack of a better word). Even if I write the perfect, say, shopping cart application, that bit of software still relies on programming frameworks, web servers, operating systems, and hardware systems that have known issues.


If bridges were like software, as soon as a drunk driver ran through a bridge's guard rail, the designers would have to follow up with a post that started, "We had some downtime from an accident, here is what we're going to do about it..."


The Holy Grail of software engineering — the one true approach to building software systems that can be applied, universally, to any and all software projects.

http://www.cse.unsw.edu.au/~se4921/PDF/CACM/p15-glass.pdf


A self-programming, learning computing machine.


You know that scene in Independence Day when Jeff Goldblum flies into the mothership and uploads a program to shut it down? That.

No, I don't mean an alien empire running on Mac OS 7.6.5, I mean extreme interoperability so software can rewrite itself to work on a different platform (self-porting, in other words).

Having said that, their firewalls were lacking!


There are really three problems I can think of, in decreasing order of difficulty:

1) writing an AI,

2) proving P=/=NP, and

3) founding a $200 billion company.


These are computer science, not software engineering.

Oh, and #3 is neither...


Making accurate time estimates? Finding an engineering methodology that actually works?


A lot of the suggestions so far have really been for CS, not software engineering.

For software engineering, how about a practical way to do proofs of correctness for large real-world systems written in mainstream languages?
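
To give a sense of what I mean, here's a toy Haskell sketch (purely illustrative, not a proof of anything interesting) of the kind of machine-checked guarantee we already know how to get for tiny properties: encode an invariant in the type so the compiler, rather than a test suite, rules out the failure case.

    -- Toy example: encode the invariant "this list is never empty" in the
    -- type, so the compiler rules out the empty-list failure case entirely.
    data NonEmpty a = a :| [a]

    -- Total function: there is no empty case to forget, so it cannot crash.
    safeHead :: NonEmpty a -> a
    safeHead (x :| _) = x

    -- The Prelude's head :: [a] -> a, by contrast, is partial: it fails at
    -- runtime on [] and the type gives the caller no warning.

    main :: IO ()
    main = print (safeHead (1 :| [2, 3]))  -- prints 1

Scaling that from "this list is never empty" up to "this billing system never double-charges" is the part nobody has made practical yet.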


Users who read the manual.



Let us not conflate computer science with software engineering.


Ahh, yes, my bad.


To make software so extensible that you never have to rewrite it, or so painless to rewrite that you never mind.


A programming language that does exactly what you want, exactly how you want it, exactly when you need it.


How to talk to women? Couldn't resist.


A model checker that somehow cleverly handles external input and state space.

Lacking that, Haskell's type system :D
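
Roughly what I mean by the type system picking up some of the model checker's slack, as a toy sketch (the door example and all the names are just for illustration): encode the valid states and transitions in the types, and an invalid sequence of operations simply doesn't compile.

    -- Two "states" for a door; the state is tracked only at the type level.
    data Open = Open
    data Closed = Closed

    newtype Door state = Door String

    newDoor :: String -> Door Closed
    newDoor = Door

    openDoor :: Door Closed -> Door Open
    openDoor (Door name) = Door name

    closeDoor :: Door Open -> Door Closed
    closeDoor (Door name) = Door name

    -- A valid sequence of transitions typechecks...
    ok :: Door Closed
    ok = closeDoor (openDoor (newDoor "front"))

    -- ...but closing an already-closed door is rejected at compile time:
    -- bad = closeDoor (newDoor "front")

    main :: IO ()
    main = putStrLn "ok: open-then-close typechecks"

Of course this only covers properties you can phrase as types; the "external input and state space" part is exactly where it stops being enough.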


Something that could have answered this question for us.


"Good, Cheap and Fast" and not being forced to pick two.


A general purpose memory manager that doesn't suck.


Provably secure, stable, error-free code.


It's a Holy Grail. It's a mystery!


Becoming more than human.


The Singularity.

http://mindstalk.net/vinge/vinge-sing.html

I think Vinge explains it better than Kurzweil.


Vinge's "superhuman intelligence" rhetoric bothers me because it suggests that intelligence is a one-dimensional quantity. I don't think this makes sense except in the context of specific domains. For example, a basic calculator has superhuman intelligence when it comes to arithmetic, because it can evaluate arbitrary expressions much faster than a human can and doesn't make mistakes. But it's dumb about everything else.

Update: The child comment by binaryfinery points to where Vinge addresses this. Sorry, I failed to add the disclaimer that I only skimmed the article.


FTFA: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever."


Here's a real holy grail of software engineering that might actually be attainable without first achieving Strong AI.

Significant code reuse without using a lot of hacks. I think Lisp and Ruby made great strides towards this, but I don't think we're quite there yet.


KISS



