
My gut tells me that the number of fatal incidents that would be prevented by complete "takeoff to landing" automation vastly exceeds the number of incidents prevented by a human correcting a computer or recovering from a failed instrument or system.

I'm willing to bet that for every "miracle on the Hudson" there are ten or more "the pilots should have just let go of the controls and let the computer handle things, or trusted their artificial horizon after getting disoriented".

The number of NTSB accident investigations that cite spatial disorientation is too great for it to be otherwise.



Autopilots disconnect all the time; they can have a fairly low threshold for the conditions they'll tolerate. Autoland can handle moderate surface winds, but pilots can handle much more. Autopilot behavior in turbulence is more variable: some systems disconnect prematurely, others need manual intervention. Pilots train for these idiosyncrasies.
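
Schematically, the disengage logic is just an envelope check against limits that sit well inside what a trained pilot can handle. Here's a toy sketch in Python; the limits and names are invented for illustration, not any real autopilot's values:

  # Hypothetical autopilot disengage check. All limits and names are
  # invented for illustration; real limits vary by aircraft and autopilot.
  AUTOLAND_MAX_CROSSWIND_KT = 20.0  # assumed autoland limit; pilots handle more
  MAX_TURBULENCE_RMS_G = 0.3        # assumed turbulence tolerance (RMS vertical g)

  def autopilot_should_disengage(crosswind_kt: float, turbulence_rms_g: float) -> bool:
      # The autopilot gives up well inside what a trained pilot tolerates.
      return (crosswind_kt > AUTOLAND_MAX_CROSSWIND_KT
              or turbulence_rms_g > MAX_TURBULENCE_RMS_G)

  # 28 kt crosswind: manageable for a pilot, out of envelope for this autoland.
  print(autopilot_should_disengage(28.0, 0.1))  # True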

The computer needs reliable data; when the data is faulty, the logic fails. That's what happened with AF 447. One annoying aspect of that event, for me, is the obtuse way the system communicated which alternate law was in effect. Two of the three pilots became confused by the computers' confusion; the computer itself had given up. The plane stalled all the way into the ocean, a condition that is one of the simplest to recognize and recover from. The captain quickly recognized the condition and the solution, but he arrived in the cockpit too late to compel corrective action.
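
The failure mode is essentially a voting problem: when redundant sensors disagree, the computer can no longer trust its inputs, so it downgrades its own authority and hands the problem back to the pilots. A minimal sketch of that idea; the threshold and law names are placeholders, not actual Airbus flight-control logic:

  # Hypothetical sketch: redundant airspeed sources, disagreement detection,
  # and a downgrade from NORMAL to ALTERNATE law. The threshold and law
  # names are placeholders, not actual Airbus logic.
  from enum import Enum

  class ControlLaw(Enum):
      NORMAL = "normal"        # full envelope protections
      ALTERNATE = "alternate"  # reduced protections; stall protection lost

  DISAGREE_KT = 20.0  # assumed: max spread tolerated across the three sources

  def select_law(airspeeds_kt: list[float]) -> ControlLaw:
      # If the redundant sources disagree too much, the data is unreliable
      # and the computer gives up its protections.
      if max(airspeeds_kt) - min(airspeeds_kt) > DISAGREE_KT:
          return ControlLaw.ALTERNATE
      return ControlLaw.NORMAL

  # Iced-over pitot tubes: two sources suddenly read low.
  print(select_law([275.0, 90.0, 110.0]))  # ControlLaw.ALTERNATE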


Then you have the MAX crashes, where the automation itself was causing the problem, and the solution was to remember to hit the stab trim cutout switches.
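
Schematically, that fix was a hardware override of a software loop trusting a single bad sensor. A toy sketch of that shape; nothing here is Boeing's actual MCAS implementation, and the names and numbers are invented:

  # Toy sketch of an MCAS-shaped failure: a trim loop driven by a single
  # angle-of-attack sensor, gated by a cutout switch. Not Boeing's actual
  # implementation; names and numbers are invented.
  AOA_TRIGGER_DEG = 12.0    # assumed activation threshold
  TRIM_INCREMENT_DEG = 0.5  # assumed nose-down increment per activation

  def trim_command(aoa_deg: float, stab_trim_cutout: bool) -> float:
      if stab_trim_cutout:
          return 0.0  # cutout switches thrown: automation is out of the loop
      if aoa_deg > AOA_TRIGGER_DEG:
          return -TRIM_INCREMENT_DEG  # nose-down trim on (possibly faulty) AoA data
      return 0.0

  # A failed AoA vane reading high keeps commanding nose-down trim...
  print(trim_command(25.0, stab_trim_cutout=False))  # -0.5
  # ...until the crew remembers the cutout.
  print(trim_command(25.0, stab_trim_cutout=True))   # 0.0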



