There is a long list of entirely preventable human-caused accidents. Is there a reason pilot-caused crashes are less scary for you? Computer-caused accidents will be fixed and won't happen again. Human-caused accidents will keep happening as long as experience is valuable.
Aeroflot Flight 593 - pilot let his son fly the plane, 75 dead
Germanwings Flight 9525 - (possibly suicidal) pilot deliberately crashed the plane, 150 dead
Air France Flight 447 - pilot caused airplane to stall, 228 dead
Aero Flight 311 - both pilots got drunk, 25 dead
And this is just a random selection; there are long lists of human-caused aviation accidents.
Therac-25 wasn’t an “it’s not ready yet!”-type issue. It wasn’t an expected or anticipated failure mode; it only became a (literal) textbook case study after people died, and the industry learned and improved as a consequence.
They ignored repeated failures and evidence of malfunction by saying it was “impossible” that it could be failing in that way.
Unexpected failure modes are the issue. The fact that the Boeing 737 MAX 8 failure was tied to one sensor suggests the industry has not fully learned the lesson.
My understanding is that it was a UX issue - the "malfunctioning" was the system working as directed by the user, but the UX was horrible at informing the user what they were doing.
That isn't really true. It was brought down by stall-prevention software that was using input from a single faulty sensor, and there was no way to override its inputs. Further, there were multiple incidents before Boeing admitted what was happening, even though in retrospect it looks like they knew what was happening all along.
My point was that it functioned in a manner vastly more similar to a conventional autopilot than what Airbus is proposing to do in this project.
MCAS was a simple algorithm that altered flight controls in a predetermined way upon a limited set of inputs. Airbus is proposing a vastly more ambitious solution that includes additional inputs from computer vision and a global view of the state of the aircraft.
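To make the "simple algorithm, limited inputs" point concrete, here is a minimal sketch of why a rule triggered by a single sensor is fragile, and how a cross-check against a redundant sensor changes the failure mode. This is purely illustrative; all names, thresholds, and values are invented and have nothing to do with Boeing's actual implementation.

```python
# Hypothetical sketch of an MCAS-like rule: command nose-down trim
# when the angle-of-attack (AoA) reading exceeds a threshold.
# All constants below are invented for illustration.

AOA_THRESHOLD_DEG = 15.0  # assumed stall-risk threshold
TRIM_STEP_DEG = 2.5       # assumed nose-down trim increment

def single_sensor_command(aoa_deg: float) -> float:
    """Trim commanded from ONE sensor reading.
    A single stuck or faulty sensor directly drives the control output."""
    return TRIM_STEP_DEG if aoa_deg > AOA_THRESHOLD_DEG else 0.0

def cross_checked_command(left_deg: float, right_deg: float,
                          max_disagreement_deg: float = 5.0) -> float:
    """Same rule, but it stands down when redundant sensors disagree,
    leaving the decision to the crew instead of acting on bad data."""
    if abs(left_deg - right_deg) > max_disagreement_deg:
        return 0.0  # sensors disagree: do nothing automatically
    return single_sensor_command((left_deg + right_deg) / 2.0)

# A stuck vane reading 40 degrees while the real AoA is 5 degrees:
print(single_sensor_command(40.0))        # -> 2.5 (trims nose down on bad data)
print(cross_checked_command(40.0, 5.0))   # -> 0.0 (disagreement detected)
```

The point of the contrast: the single-sensor version is trivially simple, yet one bad input produces a confidently wrong control action; the cross-checked version is barely more complex but fails safe. A system with many more inputs (computer vision, global aircraft state) multiplies the number of such disagreement and corner cases to reason about.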
If a relatively simple algorithm was not safe because of bad engineering decisions (or bad management incentives, whatever the case is), then wouldn't a much more complex system be even more likely to have hard-to-discover corner cases and failures?
In this instance, the simplicity of the system was its downfall.
I think the use of human pilots complicates a system. You are relying on a component of the system that is susceptible to tiredness, distraction, threats, rage, revenge, self-destruction, and sudden death. Complete automation would replace one extremely complex and unpredictable component with a less complex and more predictable one.
I wouldn't get in one of these until there are better controls on this kind of software; it's not the same as autopilot.