Hacker News

Yes, I would still classify the software as defective. It did not explicitly reject the unknown input. Just because it happens to work by accident (i.e., the consumer of the output doesn't care) doesn't mean the defect has gone away.

To be not defective, the software has to explicitly reject input that it was not designed to handle.

Imagine if the software updated with some changes, and the unknown input now produces an incorrect output. Is the defect introduced with the changes? Or was the defect always there?
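To make the distinction concrete, here is a minimal Python sketch (field names are purely hypothetical) contrasting code that explicitly rejects input it wasn't designed for with code that only works by accident:

```python
KNOWN_FIELDS = {"name", "price"}

def process_strict(record):
    """Explicitly reject input this code was not designed to handle."""
    unknown = set(record) - KNOWN_FIELDS
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {"name": record["name"], "total": record["price"]}

def process_lenient(record):
    """Silently ignores unknown fields. It happens to work today, but any
    change in how the output is consumed can surface the latent defect."""
    return {"name": record["name"], "total": record["price"]}
```

With the lenient version, nothing marks the moment the assumption was violated; with the strict version, the defect shows up at the boundary where it was introduced.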




> To be not defective, the software has to explicitly reject input that it was not designed to handle.

In some cases that breaks forward compatibility, e.g. the case where there is an unknown XML tag. You could reject the tag or the whole message, but then you'll end up rejecting all inputs from future versions.

If the whitelist of acceptable items is large, it may be acceptable to use a blacklist instead; however, if the above holds, you don't know what you don't know.
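A sketch of that failure mode, assuming a hypothetical `<order>` message and a whitelist of known tags: rejecting unknown tags is explicit, but the moment a newer producer adds a tag, every old strict consumer starts rejecting valid messages.

```python
import xml.etree.ElementTree as ET

KNOWN_TAGS = {"item", "qty"}  # the whitelist this version was designed for

def parse_strict(doc):
    """Reject the whole message on any unknown tag."""
    root = ET.fromstring(doc)
    for child in root:
        if child.tag not in KNOWN_TAGS:
            raise ValueError(f"unknown tag: {child.tag}")
    return [child.tag for child in root]

parse_strict("<order><item/><qty/></order>")  # fine today
# A newer producer adds <discount/>; the old strict consumer now rejects it:
# parse_strict("<order><item/><discount/></order>")  -> ValueError
```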


The middle ground may be to explicitly flag/indicate/log that an unknown situation has been encountered, and 'handle' it by doing something useful (continuing to work without crashing, preventing known "unknown" data from being processed silently, etc.). It may not help with forward compat entirely, but the situation would at least be explicitly known (and I'd think would be somewhat easier to modify/extend for known unknowns in the future).
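Continuing the hypothetical XML example from above, that middle ground might look like: keep the tags you know, but log each unknown one instead of either crashing or dropping it silently.

```python
import logging
import xml.etree.ElementTree as ET

log = logging.getLogger("parser")
KNOWN_TAGS = {"item", "qty"}

def parse_logged(doc):
    """Keep working on unknown tags, but make each one explicitly known."""
    known = []
    for child in ET.fromstring(doc):
        if child.tag in KNOWN_TAGS:
            known.append(child.tag)
        else:
            # Neither crash nor process silently: record the known unknown.
            log.warning("skipping unknown tag %r", child.tag)
    return known
```

Old consumers keep working against newer producers, and the logs give you an inventory of the unknowns to support explicitly in the next version.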


I've been there, painfully. On my last day on a job, some code I wrote went into production, and the databases started filling up rapidly, to the point where the whole system would halt in a few hours.

Turned out the bug had been latent in the code for 5+ years, predating me. Its data consumption had never been observed before because it was swamped by other data consumption. Changing the architecture to remove that other data brought it to the foreground.

(fwiw, the bug was caused by the difference between 0 and null as a C pointer!)


What would "reject input that it was not designed to handle" look like for an automated car?


When you come around a curve near sunrise or sunset, you may suddenly encounter visual input that overwhelms your sensors. The sun is blinding you. It might overwhelm infrared sensors, too.

If you have alternate sensors, you should trust them more, and camera systems less.

If you have a sunshade, you should deploy that.

If it is raining, or partially cloudy, the situation may change rapidly.

And perhaps you should slow down, but if you slow down too fast, other vehicles might not be able to avoid you.
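One way to express "trust the camera less" is a simple weighted fusion of per-sensor estimates, where a sensor's weight drops when it reports saturation. (Sensor names and numbers here are purely illustrative, not how any real system does it.)

```python
def fuse(estimates, trust):
    """Weighted average of per-sensor distance estimates (meters)."""
    total = sum(trust.values())
    return sum(estimates[s] * w for s, w in trust.items()) / total

# Hypothetical readings: the blinded camera badly overestimates the distance.
estimates = {"camera": 80.0, "radar": 42.0, "lidar": 41.0}
trust = {"camera": 1.0, "radar": 1.0, "lidar": 1.0}

camera_saturated = True  # e.g. glare detected at the sensor itself
if camera_saturated:
    trust["camera"] = 0.1  # keep the camera, but let the others dominate

fused = fuse(estimates, trust)  # pulled toward the radar/lidar readings
```

The key point matches the comment above: the system doesn't "reject" the blinded camera outright, it degrades how much weight that input carries.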


You could also argue that your expectations are defective. It is possible to accidentally solve a problem in a correct manner.


Not reliably.

It's not professional to design systems that rely on luck.

"Let's ignore this edge case and hope we get lucky" is not something you want to see in a software specification.


Or simply acknowledge that your initial specs didn't cover enough, update the specs, test the "new" functionality, and call it a feature in the release notes.


Getting lucky is not the same as relying on luck.


Where do you fall on autonomous cars? It's okay to be anti; I'm just curious.



