> The guarantee is not that the system never enters one of those states; the guarantee is that if the system does enter one of those states, it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?
Stated that way, it is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not; I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".
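To make my claim concrete, here is a toy sketch (every name in it is invented) of what an enumerated self-check looks like, and of the kind of state it necessarily misses:

```python
# Toy sketch of the disagreement; every name here is invented.
# DETECTABLE_FAILURES is the subset of states the designers enumerated.
DETECTABLE_FAILURES = {"sensor_timeout", "checksum_mismatch", "watchdog_expired"}

def self_check(state: str) -> bool:
    """Return True iff the system recognizes `state` as a failure."""
    return state in DETECTABLE_FAILURES

# An anticipated failure state is caught:
assert self_check("watchdog_expired")

# A state nobody enumerated -- say, every sensor returning plausible but
# wrong values -- sails through the self-check, even though any normal
# person would call it a failure:
assert not self_check("plausible_but_wrong_readings")
```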
> Stated that way, it is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not
They seem the same to me. In fact, in 27 years you are the first person to voice this concern.
> I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".
Could be, but that would be a publishable result. So... publish it and then we can discuss.
Having all your sensors fail is actually a very easy case. Imagine it: suddenly you could not see, hear, feel, smell, or taste... Do you think it would be hard to tell that something was wrong?
Having your sensors fail doesn't necessarily mean they stop providing data. It can mean they stop providing accurate data. In humans, we would call this hallucinating, and humans generally cannot tell that they are hallucinating.
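A toy plausibility check (the value ranges here are invented) shows the asymmetry: absent data is trivial to catch, while inaccurate-but-plausible data passes every local test:

```python
import math
from typing import Optional

def sensor_ok(reading: Optional[float]) -> bool:
    """Toy plausibility check: reject missing or wildly out-of-range readings."""
    if reading is None or math.isnan(reading):
        return False  # the "all sensors dead" case: trivially caught
    return -50.0 <= reading <= 150.0  # assumed plausible band for this sensor

# Total sensor loss is the easy case; every check fires:
assert not sensor_ok(None)
assert not sensor_ok(float("nan"))

# A failed-but-talkative sensor reporting 23.0 when the true value is 90.0
# passes the check. The data is inaccurate, not absent, and no local test
# distinguishes it from the truth:
assert sensor_ok(23.0)
```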