I've worked in aerospace. The organizations (both public and private) would all claim to fail "within the confines of a safe environment".
One small BS detector is when some unplanned/unmitigated test outcome gets characterized as a "test anomaly" rather than the details being disclosed transparently.
>they've already mapped out the failure modes and are able to prevent them when it really matters
This remains to be seen. The Shuttle program also had its failure modes mapped out, as did CST-100. Yet massive failures still occurred.
Well, in the case of the Shuttle, both accidents that killed people were due to previously identified failure modes. At the end of the day, risk will always be greater than 0%. At some point someone has to make a judgment call that the risk is low enough to proceed, and sometimes that call is going to be wrong.
The failure modes may have been known, but the effects were not. An FMEA needs both to work.
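To make that concrete, here's a minimal sketch using the standard FMEA risk priority number (RPN = severity × occurrence × detection, each rated 1-10); the specific ratings below are hypothetical, but it shows how a misjudged effect silently deflates the computed risk even when the mode itself is well documented:

    # Standard FMEA convention: RPN = severity * occurrence * detection,
    # each rated 1-10. All ratings below are hypothetical illustrations.
    def rpn(severity: int, occurrence: int, detection: int) -> int:
        return severity * occurrence * detection

    # Foam shedding: the mode was well known (it occurred on many flights),
    # so occurrence rates high. Misjudge the effect as minor tile scuffing
    # (severity 3) instead of a catastrophic TPS breach (severity 10) and
    # the ranking collapses, with the same mode and the same flight data:
    assumed_risk = rpn(severity=3, occurrence=8, detection=4)   # -> 96
    actual_risk = rpn(severity=10, occurrence=8, detection=4)   # -> 320
    print(assumed_risk, actual_risk)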
Regarding the foam, they had a difficult time even recreating the damage after the fact. It was apparently only on a lark that they decided to turn the gun up to 11 and, voilà, the foam now had the physical properties to damage the tile catastrophically. So yes, they knew the mechanism of foam shedding but did not properly understand the effect.
With the O-rings, they similarly just didn't have test data for those conditions. They incorrectly extrapolated from the test data they did have.
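A toy illustration of that extrapolation trap (the resilience numbers are invented, not the real O-ring dataset; only the temperatures roughly echo the record, with flight experience down to about 53°F and Challenger launching near 36°F):

    # Fit on a narrow range, then predict far outside it. The data below
    # are invented for illustration; only the temperatures echo history.
    import numpy as np

    # Hypothetical O-ring resilience measured only between 53 and 81 F.
    temp_f = np.array([53.0, 57.0, 63.0, 70.0, 75.0, 81.0])
    resilience = np.array([0.62, 0.68, 0.74, 0.81, 0.85, 0.90])

    # A linear fit looks fine inside the tested range...
    slope, intercept = np.polyfit(temp_f, resilience, 1)

    # ...but Challenger launched near 36 F, far below any test point.
    # The line predicts mild degradation; real elastomers stiffen sharply
    # near their glass transition, which the data never sampled.
    print(f"predicted resilience at 36 F: {slope * 36 + intercept:.2f}")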