The Reliability Trap: The crash of Emirates flight 521 (2020) (admiralcloudberg.medium.com)
115 points by byhemechi on Dec 23, 2022 | 81 comments



> In the minutes before flight 521 lined up to land, two flights were forced to abort their landings after encountering wind shear; however, they didn’t tell the controller their reasons for doing so, and the controller didn’t ask.

The article isn't attempting to make a point here, but for anyone who is unaware, it is this way on purpose. It's not an accident or an oversight or a bad policy.

It used to be that pilots had to justify doing a go-around. So in that critical moment, when the pilot is making the go/no-go decision, they're thinking about whether or not they can justify the go-around, not about whether or not the go-around is the right thing to do. There was a tendency for pilots to try to force a landing that just wasn't gonna happen, and it resulted in more than a few disasters. So now the policy is to just do the right thing, and the pilot doesn't have to explain to anyone (not to tower, not to management) why they decided to go around.


While you are correct, it is still pretty normal for air traffic control to ask, and it is pretty normal for the pilots to provide a reason whether they are asked or not. Wind shear is definitely information that should be shared for the benefit of other aircraft.


I've never been asked the reason for a go-around, but I've definitely offered it if I felt it might affect other aircrews. Interestingly, I have been asked the reason for a rejected takeoff; I think it's ATC policy to ask about and record all RTOs.


> But a modern airliner has so many complex automated systems that expecting a pilot to fully understand all of them is unreasonable.

Yes...

> In both cases, the pilots assumed that the autothrottle would increase thrust during a go-around, but were unaware that they had run up against an edge case where it would not.

Also yes, but...

> The common denominator between the two crashes was an overreliance on automation.

That doesn't follow. The pilots were absolutely right to assume that pressing the go-around button would indeed perform a go-around. The core issue is that the button silently failed. It's like a function that doesn't throw an exception despite failing and instead chugs along in a broken state.

It's a simple case of bad UI/UX. If a button that's supposed to do X is unable to do it, the failure should be indicated to the pilots in an obvious way.
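
To stretch that software analogy, a minimal sketch of the difference (all names hypothetical, obviously nothing like real avionics code):

    # Hypothetical illustration only: an inhibited command should fail
    # loudly rather than silently do nothing.

    class TogaInhibitedError(Exception):
        """Commanded TO/GA while the switches are inhibited."""

    def command_toga(weight_on_wheels: bool) -> None:
        if weight_on_wheels:
            # What reportedly happened: the press is simply ignored.
            # What the comment argues for: fail loudly instead.
            raise TogaInhibitedError("TO/GA inhibited: weight on wheels")
        print("autothrottle: advancing to go-around thrust")  # stand-in action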

The recommendations say as much: "The GCAA also issued recommendations intended to make the Boeing 777 a safer aircraft, including that the configuration alarm go off when the throttles are not advanced during a go-around;".


The button did not fail. Once your wheels are on the ground, advancing the throttle automatically is dangerous and requires manual throttle input, so that is what the system did.

Takeoff/go-around is a complicated maneuver involving quite a few tasks that need to be orchestrated to perform it correctly. In short, you need to quickly transition the aircraft from its landing configuration to its takeoff configuration: an ideal situation for automation. However, according to the article:

"However, the TOGA switches are inhibited on landing after touchdown, because if a pilot were to accidentally press it during rollout, it could cause the plane to run off the runway. The autothrottle system remains active, but if sensors detect that there is weight on the wheels, the TOGA switches simply won’t do anything."


You missed the point. The core issue is not the button being inhibited, but being silently inhibited. The pilots were not aware that power did not increase when they expected it to do so.

Just show a bright red warning or a voice alert when the inhibition activates.
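
Something like this, in toy form: keep the inhibition, lose the silence (hypothetical names, just the shape of the GCAA recommendation, not actual 777 logic):

    # Keep the safety inhibition, but make it loud instead of silent.

    def annunciate(message: str) -> None:
        # Stand-in for an aural/visual master-warning style alert.
        print(f"ALERT: {message}")

    def handle_toga_press(weight_on_wheels: bool) -> None:
        if weight_on_wheels:
            annunciate("TO/GA INHIBITED - ADVANCE THRUST MANUALLY")
            return
        print("autothrottle mode: GA")  # stand-in for commanding go-around thrust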


But there is an indicator: there is this huge lever that takes up most of the space in the center console, and when the thrust is increased by the autothrottle the lever moves forward. With no critique of the pilots, they were in the edgiest of edge conditions, things were happening very fast, the airplane had more lift than it should have, and they acted with all due credit to the profession during the incident.

So we look to find what could have been done differently. There are many small things that perhaps could have prevented this; basically, that is the article. But one of those things was to notice that the huge lever in the center console was not moving, and their hand was right there to pull the TO/GA switches.


The button failed. If one of its core functions can't be carried out because of the circumstances, that's fine. But it's the system's responsibility to announce this very clearly, because if a button that usually makes you go fast suddenly doesn't, you want to know.


It's weird to me that the pilots didn't notice that the engine thrust hadn't increased. I realise they have a lot to monitor and it all happened quite quickly, but it seems like not having your engines at 100% when they're supposed to be would stand out. I'd be interested to see what they would have seen and how obvious it was.


The engines don’t immediately go to 100% - it takes time for them to spin up, 5+ seconds sometimes.

The engines are also very far from the cockpit, and behind them. They hear them, but not well.

So it’s not that surprising they didn’t immediately notice the problem.

That they did notice, and relatively quickly, is a sign of how well they did. But seconds matter in this scenario, and by then the lag to spin up resulted in no significant change of thrust before impact.
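
As a purely illustrative model of that lag (a first-order response with a made-up time constant, not engine data):

    import math

    # Illustrative first-order spool-up: thrust fraction = 1 - exp(-t / tau)
    tau = 4.0   # seconds (invented); big turbofans accelerate slowly from idle
    for t in (1, 3, 5, 8):
        frac = 1 - math.exp(-t / tau)
        print(f"after {t} s: ~{frac:.0%} of commanded thrust")

which gives roughly 22%, 53%, 71%, 86%: even once the throttles are firewalled, only a fraction of go-around thrust shows up in the first few seconds.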


There is a feedback mechanism to engine throttle that is immediate and physically obvious: The thrust levers. They will move to indicate the amount of thrust the engines are ordered to produce, even when under autothrottle control.

If a pilot doesn't pay attention to where his thrust levers are when appropriate thrust is critical to safe flight, that is pilot error.

A go-around isn't complicated enough to merit reliance on automation anyway: max thrust (or whatever is appropriate), pitch up and climb according to ATC orders or the airport's go-around procedure, landing gear up, and come back for another landing attempt.


There is a button they use on go around that turns on auto throttle up, which they hit. In the scenario they thought they were in, they don’t directly interact with the throttle levers. They were paying attention to other things, like they were supposed to.

In this scenario, the auto throttle doesn’t auto throttle up because unexpectedly, they were partially wheels down for a moment, and that disables auto throttle up.

Oops.


However the levers do physically move forward and the EPR indication should spool up when the TO/GA button is pressed. Many moons ago, one of my instrument instructors was insistent on physical verification of control status even in light(er) aircraft. You didn’t take your hand off the gear lever until you saw 3 green. On takeoff or go-around, especially go-around, you kept one hand on the throttle levers until after positive rate of climb was achieved. Had the PF here had that practice, the accident would not have occurred. That said, if the TO/GA button is unavailable in a given flight mode, a more obvious verbal annunciation of that mode should be made by the aircraft.


> On takeoff or go-around, especially go-around, you kept one hand on the throttle levers until after positive rate of climb was achieved.

On multi-engine jets, common training is to remove your hand from the throttle at V1 (takeoff decision speed). Most problems after that speed are to be taken airborne and dealt with there.


Nod, and they changed their training to make many of those changes after this accident - one of about 40 changes.

Definitely one of those ‘if the PIC had been a little more paranoid, and a bunch of other rare/weird things had not happened, it would have been fine’ type accidents. The first officer was supposed to also verify things, but the specific step wasn’t in the training either, and wasn’t normally applicable because of the TO/GA automation.

Luckily no loss of life from the crash directly. If the firefighters had heeded issues from prior crash responses and fixed them in their own response, likely no firefighters would have died either.


In the "Children of the Magenta Line" video they tell you to hold the throttles physically in place in similar situations, but not this one exactly. They want you to hold all the controls when turning on or off autopilot, changing modes, or if you think something is going wrong (this kind of plane has multiple autopilots, so weird stuff can and has happened). I guess what you could do in this case is touch the throttles to make sure they are doing what you want, and retain the ability to grab them and push them forward if the auto-throttle system is not doing it.


Moving to high thrust is felt by the gravity vector shifting off your butt a little toward your back. There's another thing that has this effect: pitching up. The two can be really hard to distinguish.

(In fact, this idea of gravity-relative-to-butt is very strong in people and when that fails to be a useful model is often when we see accidents happen.)
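
For a sense of the magnitudes (standard somatogravic arithmetic, tan(theta) = a/g; numbers purely illustrative):

    import math

    # A forward acceleration a tilts the felt gravity vector back by
    # atan(a / g), the same vestibular cue as pitching up by that angle.
    g = 9.81                    # m/s^2
    for a in (1.0, 2.0, 3.0):   # plausible longitudinal accelerations, m/s^2
        theta = math.degrees(math.atan(a / g))
        print(f"{a:.0f} m/s^2 feels like ~{theta:.1f} deg of pitch-up")

That prints roughly 5.8, 11.5 and 17.0 degrees: well inside the range of ordinary pitch changes, so the seat-of-the-pants cue alone genuinely can't separate the two.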


Not sure about that: if there is no additional thrust, the aircraft will be decelerating because it's running "uphill", pulling the gravity vector forward. It'll be balanced, so you don't "feel" acceleration or deceleration.

It's the same in a car: if you are coasting in neutral and encounter a hill, even though the car will be slowing down there is no "perceptible" longitudinal acceleration, because it's caused by gravity (acting equally on the car and the occupant), just like in space or on one of those zero-G flights.

There would be an increase in drag when pitching up, that would also cause deceleration.


And that's why full-motion flight simulators "work".


Diving straight down can feel like driving forwards, which is why the https://en.wikipedia.org/wiki/Primary_flight_display is absolutely vital. You cannot trust your senses when flying a plane.


On the same theme, inverted flight feels like a rapid change of pitch downwards (both accelerate you kind of toward your head), so the natural response is to "pitch back up" and get the acceleration back under your butt (which of course inverted means you actually start diving for the ground).


I think it's a "yes, and..." because it's conceivable that the alert itself could fail to notify pilots for some reason. So the pilots shouldn't rely on the automation to work 100% of the time, nor that the system will alert them to a problem with 100% accuracy.

So it's both correct to recommend that the Boeing 777 try to alert the pilots to the misconfiguration, and to reinforce pilot behavior to reduce the risk of being in a position where that misconfiguration happens.


i'd be curious to hear the reasoning that led to "silently ignore the button press."

even basic consumer devices bark at you when a button press is invalid, and the people who work at boeing are obviously not idiots.

my guess is that there is a series of guidelines, solid reasoning, or standardizations that led to this nonsensical result, and that decision-making and design process itself would be the interesting thing to understand about what went wrong here...


> the people who work at boeing are obviously not idiots

well... knowing what we know of MCAS, and their whole approach to user interaction ... that bar is very shaky.

anyway, the answer seems to be that the button has clear feedback in the form of physical thrust set-point indicators. basically the pilot (if I remember the post correctly, the first officer) should have noticed that despite pressing the button, the set-point indicators did not move.


mcas seems sort of different, in that it was a software/computer patch for a hole that was opened up in the physical handling characteristics of the aircraft as part of the retrofit to make it more efficient. it seems clear to me that it was an experiment in fusing computer with modified airframe to try and thread the needle of "no additional training required" and increase the financial performance of the program.

i suspect the story behind most decisions in flight control ui is a very long one, with a long list of lessons learned, expectations set and human factors. these things arise in any complicated design, where some new feature violates the existing design principles and a compromise is made that on its face makes no sense, but in context is totally understandable.

that context would be interesting.


sorry, I wasn't clear what I meant.

As far as I know - please correct me if I wrote this wall of text based on inadequate information - the problem with MCAS was that they wanted to hide it, but that's not necessarily bad UX. What's bad is that they obviously failed in two distinct, but connected and together critical, steps/aspects.

1) MCAS did its thing in 10 sec bursts, which was just crazy, never-before-seen madness. (I agree local design context is important, and I would like to know WTF was the context for this.)

2) it was undocumented because Boeing argued that it was technically just a runaway stabilizer failure, already covered by the manual/training

I would argue that the 1st failure was the (more) fundamental one, the UX one, because if the fucking thing looks like what a typical runaway stabilizer malfunction looks like, then yeah, it's "okay", pilots should be able to recognize/remember the correct remediation method for it.

...

of course it's debatable how close the new airframe + engines (without MCAS) were to the old one in terms of flight characteristics. it's possible that it really flies like an old one with too much weight in the back, and that's routine for pilots.

...

and just to be clear, I think it's just inexcusably dumb to try to cheap out on proper training.


It's constantly surprising to me that, given how airline safety has improved significantly decade after decade (see https://www.bts.gov/content/us-air-carrier-safety-data, for example), people still have this "automation is bad" agenda. Every accident is a tragedy, and we should be getting even better, but the narrative that automation (and the ironies of automation) is increasing accident rates just seems diametrically wrong.

This seems related to the deep belief in risk compensation and risk homeostasis, despite weak evidence for the former and nearly none for the latter. We can be cynical about technology all we want, but to actually improve that needs to be driven by evidence.


It's more of an attitude of "over-reliance on automation is bad, because that's how you end up dropping a perfectly good plane into the ocean when the automation turns off."

https://admiralcloudberg.medium.com/blind-to-the-problem-the...

https://admiralcloudberg.medium.com/the-long-way-down-the-cr...


It's more semi-automation. Kind of like self-driving car testing, where you put a person at the wheel who is not driving but is expected to deal, with no notice, with unexpected behaviour from the automation.


How much of that increase in safety is attributable to automation specifically rather than other causes? It seems perfectly plausible that automation could be causing problems even while overall safety is increasing.


Automation is bad because it can lead to pilot complacency. There are many pilots who wouldn't be able to fly manually, most of whom we only learn about after a mishap due to failed and/or misunderstood automation.

Automation is convenient, and in many cases (but not all!) it can make flying safer. But an aircraft with an idiot at the controls is still an unsafe aircraft, regardless of how good the automation is.


I'm not sure you've made a consistent argument there. Automation is bad but in many cases makes flying safer?

MCAS was bad because Boeing and the airlines wanted to retrofit without additional training. It wasn't the automation that was bad per se.


Automation can make many flights safer, but this assumes two critical things:

1. The automation is designed and implemented in a reliable, known, safe way.

2. The pilots are competent at flying the aircraft completely manually if necessary.

If either of those two assumptions are violated, the flight is not safe.

Automation cannot replace incompetent pilots, no matter how good. Pilots can't overcome bad automation if it is not designed and implemented properly (e.g. MCAS), no matter how competent.


You're missing a necessary assumption #3 - 'The pilot needs to be competent at correctly identifying and safely flying the aircraft in any mixture of automation and manual control.'

This accident was caused by a violation of #3. I've no doubt that the pilots in question were completely capable of performing these maneuvers manually - they failed at identifying which particular mixture of automation and manual control they were in, and in taking the correct steps during them.


> MCAS was bad because Boeing and the airlines wanted to retrofit without additional training.

And because it worked inconsistently due to terrible implementation.


> MCAS was bad because Boeing and the airlines wanted to retrofit without additional training. It wasn't the automation that was bad per se.

The implementation was extremely terrible (no redundancy on the inputs), and it was Boeing that decided to hide its existence and overrides from everyone. Automation isn't bad, but when implemented with criminal negligence, it can be.


Mentour Pilot did a detailed video on this accident a couple months ago. He gives insights on the accident from a pilot's perspective. https://youtu.be/Im334Eg3ZLE


Folks interested in this topic might also enjoy the AA Training Video “Children of the Magenta Line”: https://youtu.be/5ESJH1NLMLs

Covers advice on when to go from maximum automation towards more manual control of the aircraft when the cockpit gets busy.


There are many accidents in aviation that show that 99% reliable automation kills. It is just reliable enough that you don't expect it to fail, so you are not ready to handle it when it does.

This is why the self-driving cars of today, which claim that humans should take over in case of failure, are nonsense. We have a huge body of knowledge showing that when faced with a system that almost always works, humans suck at overseeing it and taking over when needed.


This case was perfectly reliable automation. The automated systems behaved exactly as designed.

The point of the article isn't that the automation just needs to be made more reliable and that will solve the problem; it's that systems like this are becoming too complicated for anyone to understand all aspects of how they work.

You would have thought inhibiting the TOGA switches should have sounded an immediate alarm, though. It seems perfectly reasonable that a very dangerous command or input would be inhibited by the automatic systems, but it seems completely crazy that any command or input would ever be inhibited silently. I'm actually flabbergasted that this is not a fundamental rule of aircraft control systems design at Boeing.


Another poster pointed out that the alarm might also fail to sound, or maybe the switch didn’t actually work, so you need to check it anyway.

On Boeing planes, many automations such as TO/GA typically have a corresponding physical counterpart - in this case, the autothrottle slides the throttle levers forward to indicate that the engines were asked to go to full power. There are also the engine instruments, which show the engines' reaction to the requested throttle level.

It’s not hard to know that it didn’t set the throttle to full power, but the pilots didn’t check or didn’t know to check. An alarm would help, but would also be another fallible automation to be relied upon.


I appreciate there would be indicators to confirm most things, and procedure probably says those checks should be done and alarms should not be relied upon.

I still think it's crazy that there aren't alarms for any situation where the airplane disregards an input like this.


And invariably: that 1% triggers when you need it to work the most.


I am impressed by the reaction of the passengers. If I were in a plane full of smoke and I saw fire, and the crew wouldn't let me out because they were waiting for an evacuation order from a captain who might be dead, I don't think I would have waited and complied. Smoke kills you way before fire reaches you.


Reaction: The problem is neither automation per se, nor heavy reliance upon automation. The problem is 666-layers-of-intricately-interlinked-shit automation, which no mere human pilot has any chance in hell of understanding the behavior of, in real time, when it is really critical that he do so.

Idea: Add a few "Robocide" buttons to the cockpit. If pressed, they deliver a figurative bullet to the brains of the autopilot, dropping the plane into a far simpler "full manual flying mode". Pilots regularly train in doing that, and flying the plane when suddenly dropped to manual.


> Pilots regularly train in doing that, and flying the plane when suddenly dropped to manual.

I wish...

In fact this is how AF447 crashed... a plane under direct control of pilots...

Read: https://www.vanityfair.com/news/business/2014/10/air-france-...


Though, AF447 isn't a simple case of the plane being under direct control. When the two pilots give conflicting inputs, Airbus silently averages out the two inputs. The pilots were confused as to why their inputs didn't seem to be working; then the pilot who was trying to do the wrong thing (pull up) figured out why his inputs weren't having any effect and hit the button to silently ignore the input from the pilot who was trying to do the right thing (nose down). As I remember, it was the more experienced of the two copilots who was correctly trying to get the nose down, but it seems he never figured out why the plane wasn't responding to his inputs.

Also, when they went very far into the stall, the stall warning cut out, under the assumption that it was a sensor problem rather than the airplane actually being that deep into a stall. So, paradoxically, starting to improve the situation caused the stall warning to come on, while doing the wrong thing caused the warning to go away.

Both pilots were trying to debug with information intentionally being hidden from them. (Honestly, the stick should vibrate or something if your inputs are being ignored or counter-commanded by the other pilot. Better yet, mechanically link the controls as in Boeing planes... it's not great that the stronger pilot wins, but at least both know what's going on.) Granted, there was a lot of pilot error in AF447, but there were multiple user interface issues that greatly contributed to the problem.
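
A toy sketch of the dual-input behavior described above, with invented names (as I understand it the real system sums and clips the inputs rather than strictly averaging, but for opposite full deflections it comes to the same thing):

    # Toy model of the dual-input logic as described above (not Airbus code).
    def effective_input(left: float, right: float,
                        left_priority: bool, right_priority: bool) -> float:
        """Stick deflection in [-1, 1]; positive = nose up."""
        if left_priority and not right_priority:
            return left                 # other stick silently ignored
        if right_priority and not left_priority:
            return right
        return (left + right) / 2.0     # conflicting inputs silently cancel

    # One pilot commands full nose down, the other full nose up:
    print(effective_input(-1.0, +1.0, False, False))   # 0.0: neither sees why

Nothing in this model pushes any feedback to the pilot whose input is being diluted or overridden, which is exactly the complaint.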

Edit: Also, as I remember, the start of the incident was that the pitot tubes iced up and the autopilot disengaged itself because it had no idea what to do. It's hard to point to a case where the automation has explicitly given up as a case where we should rely on more automation. Clearly a world where the automation was better would have been better, but just letting the autopilot do its thing wasn't an option. The autopilot disengaged itself.


My interpretation is AF447 had one pilot refusing to let go of bad assumptions, exhibiting extremely poor stick and rudder skills, and going rogue with the controls. It's a scenario of one individual exercising profoundly bad judgement, further enabled by his peer not forcefully taking over.

Loss of the pitot tubes only implies loss of the airspeed indicator. The attitude indicator, the altimeter, and the thrust levers worked fine. I still can't believe they intentionally reduced thrust and pitched up for an extended amount of time, and not only thought this was a good idea, but didn't crosscheck the most fundamental instruments to confirm that the plane was flying level. And then proceeded to ignore the stall warnings and stick shaker.

The human error was so severe that I'm not convinced better alarms could have substantially mitigated it. At the end of the day, if a pilot forgets how to fly a plane, then their peer (presumably still capable of independent thought) needs to have the presence of mind to take over.


Ohh, yes. Reading it, part of me wanted there to be a "power down cockpit, transfer all control to a real pilot back at Air France HQ, via satellite data link" override system. Or at least an extremely loud robo-voice blaring "Get these clueless dumbf*cks out of my cockpit, and get a REAL pilot in here" through the entire plane.


Sadly, there was a real pilot on the plane - he was on his rest period when the incident started. He had come back to the cockpit and correctly diagnosed the issue, but was too late to save the plane.


I've always thought the stick linkage thing was a bit of a red herring. See here for more info: https://aviation.stackexchange.com/questions/14027/sidestick...


>Better yet, mechanically link the controls as in Boeing planes... it's not great that the stronger pilot wins, but at least both know what's going on.

Not necessarily. They are mechanically linked, but there is a breakaway mechanism designed to be failsafe against one of the sticks jamming. You could have a scenario of them fighting each other so hard the breakaway trips, and then only one of them has a working stick, at least until both sticks are aligned, allowing the clutches to engage again.


Add a rule - pilots (or cockpit crews) who can't cope with manually flying the airplane don't get to fly.

(From Vanity Fair, it sounds like the Air France pilot's union is both extremely powerful, and profoundly hostile to the idea that pilots must actually be competent. Also like opaque layers of both poorly-understood automation and infernally-clever instruments repeatedly got in the way of the pilots doing plausible things to "manually" recover.)


Thank you for the article, an amazing read I'll forward to many.


These exist. Just pull the circuit breakers for the flight computers. The systems are isolated enough that you can remove flight protections without losing flight controls.

See https://en.m.wikipedia.org/wiki/Qantas_Flight_72 for a flight computer failure. The Mayday episode states that some time after that accident, another Qantas flight had the same failure, but the crew knew of the potential issue, so they cut power to all 3 flight computers.


In any case, I'll be wearing my airbag-suit during my next flight.


This article comes across as an attempt to frame the narrative in order to exonerate Boeing for what seems like poor quality automation, not at all dissimilar from the MCAS crashes.

Other similarities are poor and incomplete documentation and/or withholding of crucial technical information.


What seems to link a lot of these cases is that the pilots tell the plane to do something and the plane decides it knows better. Throw up all the warnings you want, but a computer shouldn't be able to override the pilot's decisions.


Some overrides make sense. Airbus’s flight envelope protection prevents pilots from stalling the plane. A great many accidents have happened due to stalls.

Airplanes have never been safer, despite many more planes flying. Crashes of airliners are rare. If anything, that points to automation greatly improving safety.

Think of it this way: many more pilots accidentally hit the TOGA button on the ground. The automation has surely prevented more accidents than it has caused.


> Airbus’s flight envelope protection prevents pilots from stalling the plane.

To be fair, that's what Boeing's MCAS was also trying to do. It's just that Airbus aren't incompetent and/or criminally negligent, and don't hide the existence of such automations.


MCAS wasn’t intended as a safety system. It was designed to make the Max behave like previous 737 models.


It was designed to avoid stalling in conditions under which the previous 737 wouldn't, and thus to make the Max behave the same.


That doesn’t make it a safety system. It wasn’t integrated as part of any kind of flight envelope protection and it wasn’t necessary to fly a plane of that design safely.

The issue was they decided to use automation to avoid training pilots on how the plane actually behaves. That kills people.


But we can have both. Make the button yell at you if the circumstances make it dangerous to use it. The fact that they didn't know is a failure of the system, which should've let them know immediately that the button didn't actually do the thing.


I agree but I also wonder if there are good reasons more alarms aren’t used.

Far worse than the occasional silent failure are too many false alarms. The medical profession has this problem.

If you get too many alerts, people ignore them. There was a series of incidents where pilots were routinely pulling circuit breakers to silence takeoff config warnings going off while taxiing. Eventually a plane took off without flaps and a lot of people were killed.

Automation is very complex and the solutions aren't always so straightforward. People forget there is a serious risk of an added alarm causing more accidents.

Should there be an alarm here? Seems obvious there should be and there better be a damned good reason there isn’t.


That's a fair point. I guess attacking the problem from another angle helps then - simplifying the computer UI and minimising the number of situations where an alarm could be warranted in the first place.

But yes, this stuff is very complex and I'm (we're?) only judging this from the outside.


I wish there were a way to emergency-lock the overhead compartments. In the video several passengers can be seen going for luggage, when it was about life or death.


the simple solution would be to tell the passengers that in case of an evacuation there's insurance against material losses, and anyone holding up others will be prosecuted.


They are told that, but unawareness of the danger (no mass panic) makes them do the usual debark dance. Just don't offer them access to the luggage.


no, they are just told to "leave it"; the usual safety dance that is always done by the crew has absolutely no mention of sticks or carrots, at least in the EU

unless it has changed in the last few months, or if I somehow missed it


It remains absurd to me that companies making aircraft can make what to me sounds like basic safety equipment a paid extra. The fact that they had the ability to detect tail- or head-windshear, but charged extra for both, is absurd, even if in this case it apparently wouldn't have changed anything.


This guy's extensive, numerous analyses of plane crashes are downright excellent and deeply detailed, like few other texts about the subject that I've ever found online. If he makes mistakes here and there, or attributes blame somewhat incorrectly on occasion, it's a minor issue considering the sheer scope of what he's written.

Many of the criticisms in the comments below that I've seen are deeply uncharitable in a way that's so typical of this site, where many people post dense opinions with all kinds of viewpoints that mis-attribute blame and say mistaken things for all sorts of reasons on numerous subjects.

How ridiculously typical of the self-congratulatory audience on HN. Have none of you ever written code that has a flaw or two?


The TOGA buttons are usually on the side of the thrust levers. It looks like the reverse levers in the photo are labeled TOGA.

The engines were at idle during an extended float, so they could take longer to spool up. The crew knew this, but that's five really important seconds.

Definitely should have waited for the kick in the pants before raising the gear.

In a heavy aircraft, things happen slower. The airspeed will decay slowly after raising the nose, but you will see some initial climb until gravity has time to assert itself. Increasing headwind will introduce transient increased airspeed masking the decaying energy situation.

I wonder why spoilers were not used, but don't know how that fits in with 777 procedures.


Not sure how much it adds to this write-up, but I remember seeing this incident broken down in a pretty good YouTube video by Mentour Pilot.

https://www.youtube.com/watch?v=Im334Eg3ZLE


>that they don’t fully understand

This is it. Humans can understand any system; if you have pilots failing to understand, they weren't trained properly. If you can't train your pilots properly, use different planes. Nearly every plane crash for decades has been a human failure, not a technological one. The way airliners behave is well known and not a mystery at all; if pilots don't know what to expect, it's their fault.

Designing aircraft for human training needs is indeed important and constantly under review, but I cannot really blame aircraft for the failures of airline training programs.


Noob question: why did they retract the landing gear so early instead of waiting at least until some altitude gain?


The landing gear creates very high air resistance. Certain maneuvers are impossible to do with it down, you can't fly very fast, fuel consumption is ridiculously high, et cetera.


it was basically bad UX :(

a nice airframe and a complete shitshow of software and interaction with the machine ensemble


Similarly, we have amazingly powerful and reliable computer hardware, but continue to have software that is extremely slow and buggy and hard to use.

I theorize that since software is so invisible/intangible, these complexities are not understood by many.


Modal interfaces kill again.


We must kill Vim.



