Hacker News

You've never shipped with known bugs?

If someone asked you "If it isn't important whether the software works or not, why ship it?", what would your reaction be?

Note:

> Directional correctness facilitates the same conversation




One reason I think these meetings are sometimes so boring is that there is a sense we are looking at numbers that don't matter. That sense is reinforced if you discover that the values of the numbers we are seeing literally don't matter.

I also think there is a difference between software and spreadsheets. At least, in the role I was in, software let us do things and spreadsheets let us reason about the business.

If there were a bug in software where, say, 5% of requests to save a configuration failed, that failure triggered an automatic retry, and the result was elevated configuration-save latency at the 95th percentile and above, we might decide to ship that bug (depending on other factors). On discovering the bug we would try to understand its impact and then reason about the impact on the software's goals. For example, we might say that people rarely change configurations, and that an additional 100ms of latency for 5% of customers is something we can live with until we solve the underlying cause.
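As a sketch of how that trade-off might be reasoned about, here is a toy simulation (all numbers are assumed for illustration, not taken from a real system) where 5% of saves fail and pay for one retry:

```python
import random

# Toy simulation of the bug described above (all numbers are assumed
# for illustration): a normal save takes 40-60ms, and 5% of saves
# fail and pay for one automatic retry.
random.seed(0)

def save_latency_ms():
    latency = random.uniform(40, 60)       # a normal save
    if random.random() < 0.05:             # 5% of saves fail...
        latency += random.uniform(40, 60)  # ...and incur one retry
    return latency

samples = sorted(save_latency_ms() for _ in range(100_000))
p50 = samples[len(samples) // 2]
p95 = samples[int(len(samples) * 0.95)]
p99 = samples[int(len(samples) * 0.99)]
print(f"p50 ~ {p50:.0f}ms, p95 ~ {p95:.0f}ms, p99 ~ {p99:.0f}ms")
```

The median barely moves while the upper tail absorbs the retried requests; that bounded, well-understood impact is what makes "ship it for now" defensible.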

What we would not do, in any software org that I've been a part of, is say "Hey, we notice some of our operations are failing sometimes, anyway - let's ship it!" To me, that seems analogous to what you are saying. You know there are errors in the spreadsheets, but you ignore them on the assumption they are inconsequential, reasoning that if they were consequential you or your team would have noticed them.

There are ways for bugs to exist in software that don't block what the customer is trying to do. Maybe the bug makes it harder to accomplish a workflow, or makes it take longer, or looks silly, or something like that - and those are bugs it may be okay to ship with (depending on context).

If values are wrong in a spreadsheet, the only way I can see that not mattering is if the values themselves are unimportant. That is, either the values are important and it matters that they are wrong, or it doesn't matter that they are wrong because they are unimportant. If they are important, let's correct them. If they are unimportant, let's not discuss them.


I’m with you until the last paragraph. I think I get what you’re saying, but in my experience it just wouldn’t work in practice. If I were the sole consumer of the spreadsheet, maybe. But, alas, other people have differing opinions and we have to play nice. Specifically: important and unimportant are subjective. The agenda for a meeting is usually not set in stone. “Business review” is fairly ambiguous. Change over time? Do you want to know something is trending poorly early or late? Keep an eye on it. The audience likes to see things a certain way, so you comply, even if some of it is repetitive or unimportant. And so on.


I see. I was imagining a much more preplanned kind of thing, with screenshots on slides. In that case it makes a lot more sense.


I would say that most software doesn't have to be correct to be useful. There are exceptions, like anything to do with cryptography.

You might say that the same can be true of calculations; after all, calculations about the contingent universe (as opposed to, say, number theory) are always based on data incorporating some uncertainty. But there are two key differences:

① I don't put up a slide showing all the intermediate values of the variables in my program in order to make an argument for something. Instead, I try to show my chain of reasoning in enough detail to be convincing and to expose any relevant potentially incorrect assumptions or reasoning steps I've made so that others can find my errors, without including trivial or irrelevant details. Including trivial and irrelevant details makes it harder, not easier, to find relevant incorrect assumptions or reasoning steps.

That's why I question the motivation of people who include those in their slides: it sounds like they're attempting to make their argument so complex that there are no obvious errors, rather than so simple that there are obviously no errors.

② A 10% uncertainty in an input datum can be traced through the calculation from beginning to end, and will often result in only a 10% or 21% uncertainty in the result (depending on whether the datum enters the calculation once or twice, since 1.1 × 1.1 = 1.21), or even less. (And where that's not the case, the sensitivity should be called out.) A calculation that's simply incorrect—for example, treating millions as thousands, or forgetting to divide by the relevant denominator—commonly produces results that are off by multiple orders of magnitude.
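The contrast in ② can be shown with a toy calculation (the figures are invented for illustration):

```python
# Toy figures (assumed, not from the thread): revenue = price * units.
price, units = 20.0, 1_000_000
nominal = price * units                      # 20,000,000

# 10% uncertainty in both inputs compounds to about 21% in the
# result, because 1.1 * 1.1 = 1.21.
high = (price * 1.10) * (units * 1.10)
print(round(high / nominal, 2))              # 1.21

# An outright error -- e.g. units recorded in thousands but treated
# as raw counts -- shifts the result by three orders of magnitude.
wrong = price * (units / 1000)
print(nominal / wrong)                       # 1000.0
```

Honest input uncertainty stays in the tens of percent; a structural mistake lands a thousand times off, which is why the two failure modes deserve different treatment.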


A math error in a spreadsheet can change the entire conclusion to be drawn from the data; that is very different from known bugs users can work around.


But both can only be ascertained in hindsight, with knowledge of the issue. Software bugs can, and do, also corrupt the data that spreadsheets are analyzing, so I don’t necessarily see it as different. Also, a lot of spreadsheet info is just info. It’s not necessarily being used as a crucial decision criterion.

There’s such a wide spectrum of situations that it’s hard to be too rigid with this. My software analogy was just pandering to this audience. Probably not a perfect analogy.



