Yeah I don't disagree. I don't fly, but have sort of been following this out of curiosity. One comment I've seen from some pilots is basically "ehh, runaway trim isn't a new thing, we train for it in the sim, and there's a standard way to deal with it (disengage the automatic system and trim manually)."
So perhaps Boeing felt that this didn't really change anything in that regard. They seem to have been proven wrong.
Technically MCAS activation isn't runaway stabilizer trim, and I'm sure that tripped up the Lion Air pilots. Check out the quick reference for a runaway stabilizer[1]. Note the steps:
* Control airplane pitch attitude manually with control column and main electric trim as needed
* If the runaway stops, stop.
Well, electric trim input on the yoke will stop MCAS temporarily. Of course if these guys had any experience on an NG they're already used to the computers trimming the plane in a counterintuitive manner via the speed trim system (STS). So not only is MCAS not a runaway trim situation, but pilots flying the NG will get used to the computer trimming the stabilizer "at random".
This sort of MCAS failure presents itself to the crew as a trim runaway, and the instructions in the document continue:
4 If the runaway continues:
STAB TRIM CUTOUT switches (both) . . CUTOUT
If the runaway continues:
Stabilizer trim wheel . . . . . Grasp and hold
- - - - - - - - - - - - - - - - - - - - - - - - - - - - -
5 Stabilizer . . . . . . . . . . . . Trim manually
(That bit about grasping and holding the trim wheel was a surprise to me.)
This is what the crew of the Lion Air flight before the one that crashed did, in response to the same sort of MCAS failure, and they completed the flight without further trouble. While Boeing made a serious error in hiding the differences between this variant and its predecessors, it seems that the existing trim runaway procedure does work for this sort of MCAS failure (unless evidence to the contrary comes out of the Ethiopian Airlines investigation).
Boeing is "right" in the sense that, if you follow the normal 737 flight manual, you'll be fine, even with MCAS enabled, or even with MCAS misbehaving due to bad sensor readings or whatever.
I think the problem is that the symptoms are different from other 737s, and if you don't KNOW about the MCAS stuff, that's very surprising. So as humans, even though logically the same procedure applies, instinctively you don't think to apply it. And given the very short time frames involved and the thin margin for error, it's not too surprising that these accidents have occurred.
Perhaps the need-to-retrain criteria need to include the following amendment:
If the aircraft exhibits new symptoms, even if the correct response to those symptoms is covered by the old operations manual, pilots will need to be retrained to account for the new symptoms.
No simulator accounted for a borked AoA sensor being able to nosedive the plane, or the altered aerodynamics from the different engine configuration.
So while yes, runaway trim is a thing, the circumstances under which an error could happen are substantially different.
For instance, a pilot could check the maintenance log book and see some work or inspection was done on the auto-trim subsystem. This would prime the pilot to be more on the alert for the possibility of trim misbehavior during the flight. Crisis averted, right?
But no pilot would look at an entry for an AoA sensor being off or worked on and think, "Boy, better look out for that safety system I never knew existed that could cause a trim runaway because my previously airworthiness-agnostic AoA sensor is unreliable."
Flying and complex-system diagnosis in time-sensitive conditions require extensive ahead-of-time mental model creation.
It is weird, though, that this was not caught in simulations (not training simulations, but in design simulations.) I would assume that running simulations to see how the design behaves when sensors fail would be an absolutely standard thing to do. I don't work on airplanes, but we routinely simulate sensor failures and their effect on software systems and you'd think this would not be optional for passenger aircraft certification.
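To make that concrete: even a toy fault-injection harness catches this failure class. Here's a minimal sketch in Python; the threshold, trim rate, and function names are all invented for illustration and have nothing to do with Boeing's actual control law.

```python
# Toy fault-injection sketch: what does a stuck-high AoA sensor do to an
# MCAS-like "trim nose down above an AoA threshold" rule? All numbers
# are made up for illustration; this is not the real control law.

AOA_THRESHOLD_DEG = 12.0   # invented activation threshold
TRIM_STEP = 0.27           # invented nose-down trim units per step

def mcas_like_step(aoa_deg, trim):
    """One control step: trim nose down if AoA looks too high."""
    if aoa_deg > AOA_THRESHOLD_DEG:
        trim -= TRIM_STEP
    return trim

def run(sensor_readings):
    trim = 0.0
    for aoa in sensor_readings:
        trim = mcas_like_step(aoa, trim)
    return trim

healthy = [5.0] * 20       # normal AoA readings
stuck_high = [22.5] * 20   # failed sensor pegged high

print(run(healthy))        # 0.0   -> system stays quiet
print(run(stuck_high))     # ~-5.4 -> trim ratchets steadily nose-down
```

Even something this crude shows the essential hazard: with a single stuck sensor, the nose-down trim accumulates without bound as long as the bad reading persists.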
When all of your testing is in house, and you don't have people who are willing to put the project in jeopardy for the sake of doing it right, you'll be amazed at what can get overlooked.
If anything, the process of engineering is most difficult in that answering most questions tends to be straightforward (not easy, but straightforward once you know the right methods to apply), while figuring out whether you've asked all the right questions is the thing that keeps me awake at night. If you don't ask each question and dig in until you've answered it fully, it's easy to get blindsided.
I have difficulty believing it could have gone so horribly wrong, but I can't deny that, just from the information available, even if MCAS isn't the root cause of the Ethiopian Airlines crash, there are some egregious failures of sound practice going on at Boeing for it to have slipped up this badly.
That's the thing with sound Engineering, when you do it, it just verks. When you don't...
If you only have two sensors and you're taking input from both, then, assuming each sensor has the same probability of malfunction, you're roughly doubling the chance of processing bad input.
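Back-of-envelope, assuming independent failures with per-flight probability p (the number below is made up):

```python
# If a bad reading from EITHER of two independent sensors corrupts the
# input, P(bad) = 1 - (1 - p)^2 = 2p - p^2, i.e. just under double.
p = 1e-4                        # assumed per-flight failure probability
one_sensor = p                  # 1.0e-4
both_used = 1 - (1 - p) ** 2    # ~1.9999e-4, nearly 2x
print(one_sensor, both_used)
```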
You could choose to select whichever input leads to the least-worst bad outcome (sketched below). But that may not be enough to offset the increased risk of using both sensors. The sensors are doing something purposeful, after all.
So it may be logical what they're doing. OTOH, the logic may make sense only in a path dependent sort of way, after a series of bad decisions that put them in a corner with poor options. There probably really should be 3 or more sensors.
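For what it's worth, here's a minimal sketch of what "least-worst" selection could mean for a nose-down-trim law like MCAS. This is my framing, not anything from Boeing: the assumption is that a sensor stuck high (spurious nose-down trim) is the worse failure, so you take the lower reading.

```python
def least_worst_aoa(left_deg: float, right_deg: float) -> float:
    """Conservative pick for a nose-down-trim law: use the LOWER reading.

    A single sensor failing high then can't command spurious nose-down
    trim; the price is that a sensor failing low can mask a real stall.
    Deciding which failure mode is "least worst" is exactly the design
    judgment call described above.
    """
    return min(left_deg, right_deg)
```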
You take input from both sensors only when they agree (within some margin). There may be failure modes where both produce the same bad reading, but I don't think this doubles your chances of processing bad input.
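Roughly like this, with a made-up margin (None meaning "sensors disagree; stop trusting the value and annunciate rather than guess"):

```python
AGREEMENT_MARGIN_DEG = 5.5  # invented disagreement threshold

def cross_checked_aoa(left_deg: float, right_deg: float):
    """Use the sensors only while they agree within a margin.

    On disagreement, return None: the consumer should inhibit automatic
    trim and raise an AoA-disagree warning instead of acting on a value
    it can't tell is good or bad.
    """
    if abs(left_deg - right_deg) <= AGREEMENT_MARGIN_DEG:
        return (left_deg + right_deg) / 2.0
    return None
```

The point is that a single bad sensor then degrades the system to "automation inhibited plus a warning" instead of "automation acting on garbage."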
You have double the chance that one of the sensors is bad. Which one do you choose? The one that says AoA is normal? Maybe that's the one that is bad. Worse, maybe such a bad input is more likely in a stall scenario.
Without any ability to differentiate bad from good input, you may be better off simply relying on a single sensor at any one time and completely ignoring the other, at least as far as controlling MCAS is concerned. Tossing up a warning that says the sensors disagree is another matter.
I agree that one can argue that this doubles the chance of system shutdown compared to properly detecting that one sensor fails. That's obviously only ok if the system is expendable to a certain degree.