Analogies like "painting a room" are pretty useless because they are misleading. Especially when the product is innovative, a better analogy would be "painting a room on Mars".
With software development, unless the task is repetitive, there are far more "unknown attributes", and those unknowns account for the main part of the effort estimate.
So a critical/skeptical engineer will drive the customer nuts by asking questions nobody can answer yet, and the conclusion ends up being that an upfront estimate is impossible and the project should be stopped. Or the effort needed to produce the estimate approaches the effort needed to do the development.
There are several different categories of software projects. As you say, the innovative ones might fit your definition, but I'd argue that most don't.
The most common kind of software project I've encountered is very straightforward: CRUD with an interface plus business logic and reporting, or a relatively simple website. Those are the "paint the room" type of projects.
There are challenges with those kinds of projects, but they should be reasonably easy to estimate, as long as the team is somewhat familiar with their development tools.
Other kinds of projects ARE more difficult because they have unique or innovative elements (probably the case for a lot of startups). Those might be the "paint a room on Mars" kind of projects.
As an employee, I've encountered the first kind 90% of the time. As a startup founder, I have encountered a lot more unknowns, but I've still found my project reasonable to estimate.
Before NASA sent their first drone to Mars, they made a lot of predictions, ran a lot of tests, guessed at a lot of things that might go wrong, etc. Do some of their missions still fail? Sure. But their successes far outnumber their failures because they didn't just send a drone there and watch what happened.
So even if all you say is correct, it's still worth improving one's estimation skills.
Since Apollo, NASA has been suffering from projects that are too large. It's a spiral: once a project grows beyond a certain size, failure would be politically catastrophic. Hence the project needs even more tests and simulations, the timetable and cost keep moving to the right, and the whole thing becomes even more risk averse. A vicious cycle.
If instead there were smaller missions, and failure were tolerated as an unfortunate part of the process, progress would be much faster and new technologies could actually be attempted.
This was the whole Faster, Better, Cheaper approach: small teams of highly competent people, quick iteration, little bureaucracy...
Pathfinder/Sojourner was an FBC mission. It had a freaking webcam on it. :) Opportunity/Spirit were somewhat bloated, and the latest Curiosity rover was already a relatively traditional megaproject. Naturally there has been science-instrument maturation as well; it's not a one-sided story, but it is important to understand.
Look at how many launchers SpaceX has developed and how much it has iterated while NASA has worked on SLS. Sure, SpaceX has crashed and burned plenty of rockets and test vehicles along the way, and it has changed direction dramatically. But taken as a whole, it has made enormous progress.
There is a spectrum here, of course: if you launch deep-space probes on 20-year missions, you test differently than for a launch test vehicle. But if you had many of those space probes as well, it wouldn't be so catastrophic if some of them failed.
> Before NASA sent their first drone to Mars, they made a lot of predictions, ran a lot of tests, guessed at a lot of things that might go wrong, etc. Do some of their missions still fail? Sure. But their successes far outnumber their failures because they didn't just send a drone there and watch what happened.
How much effort went into estimating Opportunity's lifetime at 90 days? How useful has that estimate proven to be?
That is cherry picking. Did you try to comprehend the post you are replying to? Take into account all of the estimations, not just one failed estimation.
Sure, take into account all the estimations. How much value have they delivered? Show me how NASA has got more science done than if they had "just sent a drone there and watched what happened". Because I don't believe they have.
Not a useless analogy: it gets the point across that good estimation requires breaking a task down into little steps, questioning the requirements, discovering constraints, and iterating.
The estimation process doesn't change much whether you're painting a room on Earth or on Mars. If there's more uncertainty when estimating steps, you add more buffer time, for instance. The goal of the exercise is to improve your estimation; where the room is located should simply be factored into the process.
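To make that concrete, here is a minimal sketch (in Python, with entirely made-up step names and numbers) of what "factoring the location in" could look like: the breakdown stays the same, and only the uncertainty you assign to each step changes the buffer.

    # Breakdown-based estimation sketch: every step gets a base estimate
    # plus a buffer scaled by its uncertainty. Hypothetical illustration only.
    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        base_hours: float   # best-guess effort for the step
        uncertainty: float  # 0.0 = routine, 1.0 = never done before

    def estimate(steps, buffer_factor=1.0):
        # Total = sum of base efforts, each inflated by an uncertainty buffer.
        return sum(s.base_hours * (1 + buffer_factor * s.uncertainty) for s in steps)

    # Same steps on Earth and on Mars; only the uncertainty values differ.
    earth = [Step("prep", 2, 0.1), Step("paint", 4, 0.1), Step("cleanup", 1, 0.1)]
    mars  = [Step("prep", 2, 0.9), Step("paint", 4, 0.9), Step("cleanup", 1, 0.9)]

    print(estimate(earth))  # ≈ 7.7 hours
    print(estimate(mars))   # ≈ 13.3 hours

The point isn't the numbers; it's that more unknowns show up as bigger buffers, not as a different process.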