
The more I read about flight systems and protocols (and I am absolutely a layman when it comes to this), the more it seems like the cause is very rarely isolated to a single component.

The hardware, software, and human systems are so intertwined that a failure likely involves all three, even if the root cause can be isolated to one.

That being said, not much specific information about the cause has been released yet, as far as I've heard.




Problem is, if they've added so much automation that trained pilots cannot determine the proper course of action 100% of the time, the system is at fault.

"You don't have to do anything, the plane will fly itself. Unless there's a catastrophic emergency. Then you better remember everything you haven't practiced from 18 months ago" seems like a failed implementation.


That's definitely a mischaracterization of what the airlines do. Anyone who has been in the cockpit of a plane knows that you fly by checklists.

There's a checklist procedure for almost any scenario they will run into (though of course not every one). This exact issue was seen by other airlines, and their pilots followed the checklist procedures to safely regain control of the plane, as expected.

In theory, these checklists are optimized to resolve these issues and regain control as quickly as possible while ruling out other causes. It is very rare that the correct course of action for the pilot differs from the checklist procedure.

There is 0 expectation that the pilot should remember everything. Pilots are trained specifically to communicate with each other to go through these checklists as quickly as possible.

That being said, there is a major concern that this issue could pop up during takeoff, when the plane is too low to the ground to complete the procedure in time to recover control of the aircraft.
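To make that two-pilot flow concrete, here is a minimal sketch of a challenge/response checklist runner. The items are paraphrased from public descriptions of the 737 runaway stabilizer drill and are illustrative, not an actual Boeing document:

    # Minimal sketch of a two-pilot challenge/response checklist runner.
    # Items are paraphrased and illustrative, NOT an actual Boeing checklist.
    RUNAWAY_STABILIZER = [
        ("Control column", "hold firmly"),
        ("Autopilot (if engaged)", "disengage"),
        ("Autothrottle (if engaged)", "disengage"),
        ("If runaway continues: STAB TRIM CUTOUT switches", "CUTOUT"),
    ]

    def run_checklist(items):
        # Pilot monitoring (PM) reads the challenge aloud;
        # pilot flying (PF) performs the action and responds.
        for challenge, response in items:
            print(f"PM: {challenge}?")
            print(f"PF: {response}.")

    run_checklist(RUNAWAY_STABILIZER)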


Chicken-and-egg problem. How does the pilot know that the automation is malfunctioning? The pilot has to go through their mental checklist and realize that intervention is necessary to prevent catastrophic results. All this while in a critical takeoff situation.

Apparently, the plane thought all was well; it just needed to point the nose down a wee bit.
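As a purely hypothetical illustration of the detection problem (not how any certified system works), a naive cross-check might flag repeated automatic nose-down trim that opposes the pilot's nose-up input:

    # Hypothetical cross-check, illustrative only: flag a possible trim
    # runaway when automatic nose-down trim keeps firing while the pilot
    # is commanding nose-up.
    def possible_trim_runaway(samples, window=5):
        # samples: list of (auto_trim_nose_down, pilot_pitch_up) booleans,
        # most recent last
        recent = samples[-window:]
        return len(recent) == window and all(
            auto_down and pilot_up for auto_down, pilot_up in recent
        )

    history = [(True, True)] * 5
    print(possible_trim_runaway(history))  # True -> alert the crew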


And yet it is precisely because Captain Sullenberger did not follow protocol, in the moment, that he was able to save the lives of everyone aboard flight 1549. It was only determined afterwards (obviously) that he made the right call.


Many pilots in similar situations would have made the wrong decision. As a passenger you are not necessarily going to get someone of Sullenberger's quality. And it is possible that automation could help in this kind of situation. It could provide an estimate of glide distance. It could use spatial data to identify crash landing sites, avoid populated areas, and design an optimal landing profile. All in a fraction of a second. Of course this kind of failure is so unusual that it is probably not worth designing the automation to deal with it.
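As a back-of-the-envelope version of that glide-distance estimate, assuming a glide ratio of roughly 17:1 (a commonly cited figure for an A320-class airliner; the altitude below is illustrative, not from the accident report):

    # Rough glide range: distance is roughly altitude times the glide
    # (lift-to-drag) ratio. Both numbers here are assumptions.
    def glide_range_km(altitude_m, glide_ratio=17.0):
        return altitude_m * glide_ratio / 1000.0

    print(f"{glide_range_km(860):.1f} km")  # ~14.6 km, ignoring wind and turns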


I mean, initially pilots were not informed this system existed. Certainly the assumptions that went into that decision seem to match up with what the person above you is describing.


In theory (not saying I agree), the "regain control" checklist is very similar before and after this change, which is part of why they did not see a need to communicate it until after the first crash.

Reviewing the video below, it still appears to line up with this. He doesn't mention the actual memory items changing; his explanation is that the pilots started using the wrong memory items because of information overload.

Example: they could have been going through the stall memory items instead of the runaway stabilizer memory items.


That appears to be contradicted by this video, which was linked above: around the 8:45 mark, it's a different set of memory items ("Runaway stabilizer") that should be enacted when this system comes into force outside of a stall situation.

https://www.youtube.com/watch?v=zfQW0upkVus


The problem is information overload for pilots. The pipeline through which commercial pilots are trained has also changed: 50 years ago a lot of pilots had military backgrounds and training, so they had more experience with shit-going-down-the-drain situations.


50 years ago they also crashed about 100 times as often as today.

(Air traffic has increased ten-fold since 1970, while annual fatalities went from about 3,500 to a few hundred.)
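For what it's worth, the arithmetic behind "about 100 times", taking "a few hundred" to mean roughly 300:

    # Per-flight fatality rate then vs. now, using the round numbers above.
    deaths_1970, deaths_now = 3500, 300   # annual fatalities (~300 now, assumed)
    traffic_growth = 10                   # traffic up ten-fold since 1970
    rate_ratio = deaths_1970 / (deaths_now / traffic_growth)
    print(round(rate_ratio))  # ~117, i.e. on the order of 100x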


That is literally how Tesla and some other players started car automation, with the same predictable results. A human just cannot stand on standby in perpetuity. Either the human must be in control or out of the loop.


Standing by in a Tesla and standing by in an airplane are two very different things. You cannot compare them.

When a typical civilian passenger plane throws up its hands and yields control to its pilot, the pilot gets 10+ minutes to fix the problem, helped by a copilot, mountains of checklists, and a direct audio line to air traffic control.

Nothing like the 5 seconds you might get when your Tesla yields.



