A system that can sense when it is in a failure mode is analogous to a “detector” in the Neyman-Pearson sense. A detector that says it is OK when it is actually in a failure condition is said to have a missed detection (MD).
OTOH, if you say you’ve failed when you haven’t, that’s a false alarm (FA).
In general, there is a trade-off between MD and FA. You can drive MD probability to near-zero, but typically at a cost in FA.
Again in general, you can’t claim that you can drive MD to zero (other than by also driving FA probability to one) without knowing more about the specifics of the problem. Here, that would be the sensors, etc.
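To make the trade-off concrete, here is a toy sketch of a scalar threshold detector (the unit-variance Gaussians and the separation `mu` are illustrative assumptions, not anything claimed in the thread): under "healthy" the monitored statistic is N(0, 1), under "failed" it is N(mu, 1), and we declare failure when it exceeds a threshold `tau`. Sweeping `tau` shows that pushing P_MD down necessarily pushes P_FA up.

```python
import math

def Q(t):
    """Gaussian right-tail probability P(X > t) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(t / math.sqrt(2))

mu = 2.0  # assumed separation between "healthy" and "failed" distributions

for tau in (-2.0, 0.0, 1.0, 2.0, 4.0):
    p_fa = Q(tau)            # false alarm: healthy statistic exceeds tau
    p_md = 1 - Q(tau - mu)   # missed detection: failed statistic stays below tau
    print(f"tau={tau:+.1f}  P_FA={p_fa:.4f}  P_MD={p_md:.4f}")
```

Lowering `tau` makes the monitor more paranoid: P_MD shrinks but P_FA grows, which is exactly the trade-off described above.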
In particular, for systems with noisy, continuous data inputs and continuous state spaces -- not even considering real-world messiness -- I would be surprised if you could drive the MD probability to zero without a very high cost in FA probability.
As a humbling example, you cannot do this even for detection of whether a signal is present in Gaussian noise. (I.e., PD = 1 is only achievable with PFA = 1!) PD = 1 is a very high standard in any probabilistic system.
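A quick numerical sketch of that humbling example (the mean shift `d` is an assumed SNR parameter for illustration): for the threshold test on a signal in unit-variance Gaussian noise, achieving P_D = 1 - eps forces the threshold toward minus infinity as eps shrinks, and the false-alarm probability climbs toward 1.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal
d = 1.0           # assumed mean shift when the signal is present

# Threshold test: declare "signal" when x > tau.
# P_D = 1 - cdf(tau - d), so P_D = 1 - eps requires tau = d + inv_cdf(eps).
for eps in (1e-1, 1e-3, 1e-6, 1e-9):
    tau = d + N.inv_cdf(eps)
    p_fa = 1 - N.cdf(tau)
    print(f"P_D = 1 - {eps:.0e}  requires  P_FA = {p_fa:.6f}")
```

No finite threshold reaches P_D = 1 with P_FA < 1; the overlap of the two Gaussians guarantees it.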
You've done a great job of describing the problem. It is manifestly possible to drive missed detection rates very close to zero without too many false alarms because humans are capable of driving safely.
Ah, yes... you have convinced me - though humans can apply generalized intelligence to the problem, which I imagine is particularly useful in lots of special cases.
So we can get within epsilon for an undefined delta because humans can do something, although not always.
Right, that whole claim about engineering a system that always knows when it’s not working sounds rock solid to me. After all, we can build a human, right?
Discrete-input systems can behave differently: if the two hypotheses produce non-overlapping (e.g., noiseless, quantized) observations, they can be separated exactly, and both error probabilities can be zero.
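A toy illustration of why the discrete, noiseless case escapes the trade-off (the 0/1 readout is an assumed idealization): when the two hypotheses emit non-overlapping observation sets, a simple membership test gets both error rates to exactly zero.

```python
# Noiseless quantized readout: "healthy" always emits 0, "failed" always
# emits 1. The supports don't overlap, so a threshold separates them exactly.
def detect(x: int) -> bool:
    return x > 0

h0_samples = [0] * 1000  # healthy observations
h1_samples = [1] * 1000  # failed observations

p_fa = sum(detect(x) for x in h0_samples) / len(h0_samples)
p_md = sum(not detect(x) for x in h1_samples) / len(h1_samples)
print(p_fa, p_md)  # → 0.0 0.0
```

Contrast with the Gaussian case above: additive continuous noise makes the supports overlap everywhere, so zero/zero is unreachable.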