The FAA suffered a major loss of credibility in their handling of the first round of testing/accreditation of the MAX.
The FAA can't afford another stuff-up this second time around, and as such I suspect they will be checking every aspect of the plane in very fine detail.
I still don't trust the FAA at all. They've been outsourcing their own work to Boeing since the 787 Dreamliner [0]. That was seven years ago. I don't think this will make them change their practices. The 787 fiasco certainly didn't.
Personally, I'd be happiest if the world just scrapped all the MAXes. Just scrap 'em and let Boeing go out of business; it's what they deserve now, and they shouldn't exist anymore as a company. There are enough businesses that happily build planes without putting profit before people.
It's really too bad that there's no footage of either of the crashes, because if people could see what happened they would never fly one of those things again. That's why people don't fly zeppelins anymore. Not because of what people said or read, but because they saw. The Max deserves the same.
Do you want to fly in a plane which was created using the deaths of over 300 people to finalize the design?
It's not footage, but that angle of attack is absolutely nuts. I don't think my brain would have been able to process that kind of image out of my window if I'd been a passenger on that plane.
Not to mention the AoA data was suspect, at least from one of the sensors. That's what caused the chain of errors leading to the MCAS nosediving the plane.
It's especially important, while watching the sequence starting around 5:20 (both the crazy, scary angle and the speed of it), to remember the way people defended Boeing and claimed the pilots could have dealt with it, with arguments along the lines of "if they took a step back and thought rationally", or "if they read the full procedure manual they have access to", or "if they connected the dots and realized it was similar to other system X".
Those people, pilots included, were thrown into the ground.
> I still don't trust the FAA at all. They've been outsourcing their own work to Boeing since the 787 Dreamliner
I don't think the main issue is outsourcing the engineering work.
The bigger issue is that they were outsourcing their management and reporting to Boeing as well. That meant that, rather than independent oversight, Boeing managers were able to push for shortcuts and cost cutting and could threaten the jobs of engineers who didn't toe the line.
The biggest problem is not Boeing outsourcing its work; it's the FAA outsourcing its design certification work to Boeing.
Boeing now designs the planes, and certifies them for safe design. That's what's nuts. In the documentary I posted above that problem was already identified, and that was seven years ago. Nothing has changed, Boeing still certifies its own planes and the FAA just signs off on them.
>because if people could see what happened they would never fly one of those things again.
They might not anyway.
As it is, people are irrationally afraid of flying. Plane crashes are a fetishized "nightmare scenario". Multiple crashes linked to the same plane (or company) will, likely, have terrible business consequences.
Well we can hope, can't we? As long as no exec at all is going to go to jail for this, I say let Boeing die. It's the least we can still do for those 300+ victims and their families.
"Let them die, or at least never use them for passenger travel anymore" is what people said about McDonnell Douglas (the DC-9, DC-10, and MD-11 must account for half of the Air Crash Investigation episodes), and the company did lose a lot of value.
Maybe there will be others that are happy to give it a go. Maybe it's best to have a monopoly, and forever get rid of rushing planes to market to beat the competition. Maybe in aviation, that would be a good thing, who knows.
I think overall we should consider this process a very positive thing: the FAA and Boeing (and Airbus as well) are all going to learn a lot from this process, which will make air travel safer in the future. It's unfortunate that it took two tragedies to get us here, but if you look at the history of aviation improvements, this isn't the first and it won't be the last time.
The reason people are outraged is that "don't use a single sensor to decide that the plane should perform an uncontrollable nose-dive" isn't the outcome of new learning. That wasn't competent airworthiness seventy years ago. It's a lesson the industry thought it had already internalized. We shouldn't applaud mistakes of the distant past being repeated recklessly as some kind of improvement.
I wouldn't say the 737 MAX is nothing like the 737 NG, they're quite a lot alike in important ways, but there are clearly significant differences. And there should be an aggressively adversarial system to determine whether the difference is significant enough to warrant a separate type certification for the MAX variant.
The only difference between these two descriptions is whether the FAA implicitly understood that the MAX is NOT the same as the 737. If yes, then they were talked into accepting it anyway. If no, then they were tricked (unless Boeing genuinely believed that there was no difference, which is clearly false.)
The kind of people who work in the FAA jobs where those decisions are made know enough about planes to realize that a different engine placement is not without effect. That alone means they should at least have done extensive testing internally, which they didn't, since it's pretty clear they used the older 737 simulators and didn't have 737 MAX simulators.
This is a very common phenomenon that, as Quality Assurance, you specifically have to work around when talking to regulators, and there are, in fact, white-collar cultural norms in place which shape the way most corporations I've worked at talk about potential regulatory risks.
For one thing, if you bring up any questions regarding an issue at all, the preferred medium of communicating it until a decision is made tends to be direct person to person with no electronic paper trail. (This insulates the company from getting caught out by E-Discovery in case of legal troubles).
Secondly, there's the practice I call anchoring. This is where you go to the regulator with a very narrow, refined question looking for an answer, explain why you think it's a good idea, and hope they don't look too deeply into it before saying yes, you can do it. If it works, you have a document stating the regulator told you it was okay when you asked. To prevent this, you have to go to your regulator and give them the high and wide view:
A) What is the problem?
B) What are the relevant regulations impacted?
C) What additional risks are created with each possible answer?
Businesses can be a bit squirmy when engineering folks start getting too chummy with compliance. I know I've found a good one when I don't get funny looks for asking whether something questionable-sounding has been run past Compliance.
Everything I've read in this investigation reeks of anchoring toward the FAA. Especially given the whistleblower testimony as reported in the 60 Minutes exposé on the MAX.
What seems to have happened is that the system was originally designed with more conservative parameters. Testing showed that these parameters were insufficient to prevent a stall so they "turned it up to 11", and didn't re-evaluate the original design.
It was that they found similar behavior happening in a low-speed part of the flight envelope. That required increasing the authority and losing the G-load safeguard.
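To make that change concrete, here is a rough sketch in Python of what "more authority, no G-load gate" amounts to. All numbers, names, and thresholds here are placeholders I invented for illustration, not Boeing's actual figures or logic:

    # Illustrative sketch only; parameter values and thresholds are invented.
    ORIGINAL_LIMIT_DEG = 0.6   # small nose-down stabilizer increment (placeholder)
    REVISED_LIMIT_DEG = 2.5    # much larger increment after the low-speed change (placeholder)

    def mcas_increment(aoa_deg, g_load, limit_deg, use_g_safeguard):
        """Return a nose-down stabilizer trim increment, in degrees."""
        if aoa_deg < 14.0:                      # below an (assumed) activation threshold
            return 0.0
        if use_g_safeguard and g_load < 1.1:    # original idea: only act in a high-G maneuver
            return 0.0
        return limit_deg                        # otherwise command the full increment

    # Original design: small authority, gated on both AoA and G-load.
    print(mcas_increment(aoa_deg=20, g_load=1.0, limit_deg=ORIGINAL_LIMIT_DEG, use_g_safeguard=True))   # 0.0
    # Revised design: larger authority, triggered on AoA alone (and it can repeat).
    print(mcas_increment(aoa_deg=20, g_load=1.0, limit_deg=REVISED_LIMIT_DEG, use_g_safeguard=False))   # 2.5

The point of the sketch is that the second call fires on a single condition, with several times the authority, which is exactly the combination that makes a bad AoA value so dangerous.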
This isn't some previously undiscovered issue in aviation. This is plain and simple Boeing prioritizing cost savings over human lives. It is as simple as that. There are no technical takeaways to be had from the two tragedies that took place. The only takeaway is that Boeing betrayed everyone but their pockets.
I would prefer that Boeing be put out of business, rather than "learning a lot from this process". I think that the word "unfortunate" is too conservative to cover this situation.
A corporation is not a person and punishment isn't always the answer, but we are looking at a lot of deaths and this new issue sounds like it could have caused even more. If the FAA had missed this one we could have been looking at another crash. Maybe putting Boeing out of business would serve as a cautionary tale to the remaining companies in this space that they should be making more conservative choices when it comes to development of a product that has the potential to injure or kill such a large number of people.
As it is, I'm not sure what the moral of this story is for Boeing. Is it that the company is "too big to fail"? Is it that they need to tough out the bad press to really be successful?
I worry that maybe it's something along the lines of "you have to crack a few eggs to make an omelette."
Boeing is never going to be "put out of business." They are too important to the US economy and US national defense for that to ever happen.
What could happen (but still unlikely) is that those within the company who made the decisions that led to the deaths of 300 people could be held to account in a court of law.
> What could happen (but still unlikely) is that those within the company who made the decisions that led to the deaths of 300 people could be held to account in a court of law.
Sorry, I laughed. That will never happen. Boeing may go under, but those rich bastards will always be able to stay out of jail. Name me one exec of a plane manufacturing company that has gone to jail over a crash. There aren't any.
> FAA and Boeing (and Airbus as well) are all going to learn a lot from this process which will make air travel safer in the future
Yeah, right. Until they start cutting corners again for economic reasons. There's absolutely no change in any of those organizations driven by this incident, or by an increase in standards, or by safety; there are no new regulations. It's all a dog and pony show driven by money and saving their asses.
"if you look at the history of aviation improvements this isn't the first and it won't be the last time"
And that's a great argument for just letting the next horrible accident happen in a few years. Until people start demanding that at least the goal of those organizations should be Total and Complete Flight Safety, the FAA and Boeing will keep cutting corners, because "accidents will always happen, this isn't the first and it won't be the last, so who cares". Accidents will stay a calculated risk for them, instead of something that should never, ever happen.
>There's absolutely no change in any of those organizations driven by this incident, or by an increase in standards, or by safety; there are no new regulations
How fast do you think this stuff can happen?
>And that's a great argument for just letting the next horrible accident happen in a few years.
The next horrible accident is going to happen in a few years, sorry to tell you.
>Accidents will stay a calculated risk for them, instead of something that should never, ever happen.
I'm pretty sure what you're trying to get across is that Engineering is the business of enabling the taking of calculated risks, but you're doing it in a needlessly inflammatory way.
And the poster you're responding to isn't necessarily wrong either. Just because engineering allows us to identify, analyze, and calculate the impact of risks does not free us of the obligation to favor the most conservative approach consistent with serving the public interest.
Above I posted a seven-year-old documentary about the 787 Dreamliner, and it showed the same problem of Boeing doing the FAA's work. So to answer your question: I don't really know, but it seems to take decades at least before anything changes, if anything changes at all. And that's really appalling.
>I'm going to assume you're no engineer.
I am, actually.
As an engineer, you can calculate a risk like: this part under this type of load will fail at x, and then design so that x is never reached (or only once in a million years or so), to make the design safe.
A manager can calculate risk like: if we sell 5k planes, and we can make them $1000 cheaper by using a part that will increase the likelihood of a crash by only 2%, since the chance of a crash is already very very low, we might as well use the inferior part and make $5m extra. That way I can buy that third mansion I want.
Both are calculating risk. The engineer calculates for safety, the manager for his stock options. Which do you think is better?
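As a toy illustration of the two calculations (in Python; the $1,000 saving and the 5k planes come from the example above, everything else is invented):

    # Toy numbers: the $1,000 saving and 5,000 planes are from the comment above,
    # the loads are invented for illustration.

    # The engineer's calculation: keep the worst expected load well below the failure load.
    failure_load_kn = 150.0
    max_expected_load_kn = 50.0
    safety_factor = failure_load_kn / max_expected_load_kn   # 3.0, comfortably above 1

    # The manager's calculation: saving per plane times fleet size.
    planes_sold = 5_000
    saving_per_plane_usd = 1_000
    extra_profit_usd = planes_sold * saving_per_plane_usd    # $5,000,000

    print(f"safety factor: {safety_factor}")
    print(f"extra profit:  ${extra_profit_usd:,}")

Both are "calculating risk", but only one of the calculations has the failure mode in it.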
My point is, the goal should always be complete safety. Even though engineers know that that is impossible, and even though lots of managers don't care at all about safety. We should still strive for it. Total flight safety should be the goal, not a byproduct of good engineering.
That's why we have the FAA in the first place: because we don't trust those managers at Boeing to make the right decisions, because experience has taught us we shouldn't.
We trusted the FAA and Boeing to hold that goal of passenger safety to the highest standard, but they betrayed that trust completely by letting Boeing do the FAA's work! Isn't that completely bonkers? Perhaps we should just scrap the FAA as well and let the Europeans certify all planes. I don't know. But right now there's nothing but ass-saving going on, and that doesn't restore trust in either of those organizations.
Lots of people should be going to jail for this, but they won't, because they're rich and can buy their way out. Everybody knows this. That's the way the United States works: justice is only for the rich and powerful. That's how Boeing can kill 300+ people and get away with it. As an engineer, this worries me greatly.
Me too. I know the 787 is chock-full of design and construction errors as well, but I'm curious about the others. We'll have to wait for a model to start falling out of the sky regularly to get an answer, though, because as long as it doesn't fail they're sure as hell not going to check. That's how you design planes at Boeing, see? You half-ass a plane, and then let the public beta test it.
Is this why third-world state airlines seem to be first in line to test the new models (Ethiopian Airlines, etc.)? They get an extra good deal on them. Makes me sick, really, that this is the world we're in.
Asking because I legitimately don't know: was the FAA failure here more attributable to the agency, or the funding (and therefore manpower*expertise) available to it?
Many articles about the 737 Max crashes have been talking about how it’s largely a systemic failure of the process due to the US culture of deregulation. That would make this both attributable to the agency and to the funding in a way that is complex and hard to separate. As I understand it, the agency has been allowing airplane manufacturers to essentially do their own testing more and more over time, despite the obvious conflict of interest.
Came here with the same thought. It's not just the culture at Boeing; we see this every day in the US at different levels of administration. This country is simply owned by corporations, and while deregulation was great for the progress which built it, it's not okay anymore.
I think that attitude is part of the problem. If you're competent, it's usually much easier to find a new job than it is to take on entrenched cultural and funding problems that create risks. This further entrenches the problems by driving out dissenters, and if you work for a regulator, the 'creative destruction' of the free market can't fix it either.
While all true, by quitting you escalate the problem.
If upper layers "solve" it by hiring less competent people, then that is something on their conscience.
When the problem is at the top, just bearing with it at the bottom will not help at all. Even if you manage to save lives, you virtually sacrifice your own.
> The FAA suffered a major loss of credibility in their handling of the first round of testing/accreditation of the MAX.
> The FAA can't afford another stuff-up this second time...
Because I'll just check with the next air transit regulatory body, right?
The FAA and EASA have an agreement of trust regarding type certificates issued by the other party[0] (i.e., if the FAA certifies an aircraft, EASA will allow that aircraft to fly in Europe without performing its own technical certification process).
This agreement only works as long as both parties trust each other's certification process.
The fact that they disagree is what is notable. The FAA lost credibility by being the last to ground the MAX. You side with the one that has a history of not being wrong.
That makes me doubt the severity of this issue slightly. Maybe the FAA is pointing out a relatively minor issue to rebuild trust, and Boeing isn't in the position to disagree, or even say it's minor, so they're acquiescing.
Yeah I wouldn't be surprised if this was an issue somebody knew about but with the recent attention they're forced to disclose it before somebody else discovers it.
There are a myriad of issues that put the planes at risk, but I think the biggest issue of all is that when the control system (MCAS) is engaged, it ignores pilot feedback.
"But with the MCAS activated, said Fehrm, those breakout switches wouldn’t work. MCAS assumes the yoke is already aggressively pulled back and won’t allow further pullback to counter its action, which is to hold the nose down.
Fehrm’s analysis is confirmed in the instructions Boeing sent to pilots last weekend. The bulletin sent to American Airlines pilots emphasizes that pulling back the control column will not stop the action.
Fehrm said that the Lion Air pilots would have trained on 737 simulators and would have learned over many years of experience that pulling back on the yoke stops any automatic tail maneuvers pushing the nose down." [0].
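A crude sketch of the behaviour described in that quote, written as Python for clarity; this is not Boeing's actual logic, and the function and flag names are mine:

    # Simplified illustration of the described behaviour, not actual flight-control code.
    def nose_down_trim_continues(column_pulled_back_hard, mcas_active):
        """Does the automatic nose-down stabilizer trim keep running?"""
        if column_pulled_back_hard and not mcas_active:
            # The behaviour pilots trained on: hauling back on the column trips the
            # column cutout ("breakout") switches and stops automatic nose-down trim.
            return False
        if mcas_active:
            # Per the quote: MCAS assumes the column is already held back, so the
            # cutout is bypassed and the nose-down trim continues despite the pull.
            return True
        return True

    print(nose_down_trim_continues(column_pulled_back_hard=True, mcas_active=False))  # False
    print(nose_down_trim_continues(column_pulled_back_hard=True, mcas_active=True))   # True

The second call is the surprise the Lion Air crew ran into: the reflex they had built up over years of simulator time no longer did anything.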
If you bought a new computer, how pissed off would you be if you lost data not because of a hard-drive failure, but because of a weird design decision of the 1 penny caps lock key? Imagine spending the time to setup a proper RAID system and losing everything because of a design decision in the keyboard.
I mean if the media keeps reporting about the small stuff that's wrong, it's going to make people go "well planes are complex and things happen" and almost ignore the seriousness of a design decision that ignores user input.
To be fair, ignoring user input could potentially have saved Air France 447... I mean, I actually can't think of an automated foolproof system that would've fixed 447, but incorrect input was a major factor.
IIRC, the cockpit voice recording included a comment from one of the co-pilots about how pulling back on the yoke couldn't cause a stall. The assumption was that the Airbus's fly-by-wire system would prevent it and ensure the aircraft still climbed as long as the pilot held back on the stick.
The co-pilot apparently didn't realize that the sensor issue that disabled the autopilot also disabled the stall prevention. And that's despite an audible "STALL" warning being repeated in the background.
The captain was not in the cockpit when the whole situation started, but as he re-entered the cockpit during the stall he saw one of the co-pilots holding back on the yoke and told him to push the yoke forward to prevent the stall. The co-pilot followed the instructions, but only for a few seconds before pulling the yoke back again.
All of this is to say if the plane hadn't been known to ignore user inputs in most situations, the co-pilot might not have assumed the Airbus would do the right thing and climb no matter what when pulling back on the yoke. So in a sense, never ignoring user inputs might have also saved Air France 447.
Apparently (think I read in the Langewiesche feature) the plane ended up in such a deep stall that the flight control software started ignoring the AoA sensor data (as implausible) and the STALL warning stopped. But when the co-pilot stopped pulling back on the stick, the AoA decreased, and the STALL warning sounded again.
This might have convinced him that easing off on the stick was actually causing the stall, which was tragically misguided.
Exactly - the computer had switched contexts, but the pilot hadn't. And expecting pilots to switch their mental map of expected behaviour when the computer does (and did so with, from the accounts I read, very minimal indication that it had done so) during a high stress situation, is asking for trouble.
This is my number one objection to over-reliance on automation.
Every piece of software is a mechanism. In order to truly be able to safely use something without outside aid, one must have a complete mental map of the mechanics of the system in question. Abstraction helps; but not when you start getting into high-risk contexts.
The copilot pulling the yoke back continued to do so, long after the other, much more experienced, copilot had formally assumed control and had attempted to bring the nose back down by pushing the yoke forward. Ultimately the inexperienced copilot fighting against his more experienced superior was what doomed the airplane. Both the senior copilot and the captain immediately identified the problem and attempted to take the correct action.
This is not a problem with how the system works, since this behaviour is explicitly communicated to pilots. It even says right on the instrument panel which control law the plane is in. There are only a handful of control laws and the differences aren't that complex. Anyone with sufficient experience flying Airbus products knows this.
I don't know a whole lot about this, but I seem to remember there was one design decision that, while not wrong, was different from the generation before: the sidesticks are not mechanically coupled to one another. If they were mechanically coupled, the experienced pilot could have felt the other pilot pulling back on the controls. Instead, the two pilots were pulling the controls in different directions AND the plane was averaging the control inputs, giving no feedback to the pilots that what each was doing was wildly inconsistent or contradictory.
>The copilot pulling the yoke back continued to do so, long after the other, much more experienced, copilot had formally assumed control and had attempted to bring the nose back down by pushing the yoke forward.
At least according to the official accident report, neither of the pilots at the controls consistently made nose down stick inputs.
Basic, rudimentary 'stick and rudder' flying skills were a big factor in AF447's crash. All old-school pilots know that when your aircraft is in a nose-high stall condition, you never keep pulling back on the stick; instead you push it forward to lower the nose and get the wings flying again.
The fact that the co-pilot in question kept holding the stick to the back stops was the main reason that the aircraft wallowed into the sea. Weirdly, he did let go of the stick for a brief few seconds, which was the only time during the harrowing descent that the aircraft started to behave normally, but then he pulled it back and held it back right up until impact.
Yep, the aircraft could have ignored these inputs, but the inputs were counter to what any reasonably skilled pilot would have done. (Note: different from the MAX crashes, where pulling back on the stick, with enough speed in hand, IS the accepted way to stop a descent.)
Part of the issue may have been that the plane had slowed down so much that the stall warning stopped (it disengages below a certain airspeed apparently). When he stopped pulling up, the plane sped up and the stall warning started again. Pull up again, plane slows down, stall warning stops.
I wonder if something about this system was changed after that incident - why not keep sounding the stall alarm if the plane ends up outside the flight/sensor envelope? Can’t you assume that it didn’t magically cross the stall zone back into normal flight?
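The fix being asked for here is essentially a latched warning. Here's a sketch in Python of what that could look like; the thresholds and class name are mine, and this is not Airbus's actual logic:

    # Sketch of a latched stall warning; thresholds are invented, not Airbus's.
    class StallWarning:
        def __init__(self, stall_aoa_deg=15.0, min_valid_speed_kt=60.0):
            self.stall_aoa_deg = stall_aoa_deg
            self.min_valid_speed_kt = min_valid_speed_kt
            self.latched = False

        def update(self, aoa_deg, airspeed_kt):
            data_valid = airspeed_kt >= self.min_valid_speed_kt
            if data_valid and aoa_deg >= self.stall_aoa_deg:
                self.latched = True      # stall detected: latch the warning on
            elif data_valid and aoa_deg < self.stall_aoa_deg:
                self.latched = False     # demonstrably recovered: clear it
            # If the data is invalid, keep the previous state rather than going silent:
            # the aircraft can't have magically un-stalled just because sensing stopped.
            return self.latched

    w = StallWarning()
    print(w.update(aoa_deg=40, airspeed_kt=200))  # True: stalled, warning on
    print(w.update(aoa_deg=40, airspeed_kt=50))   # True: data invalid, warning stays on
    print(w.update(aoa_deg=5, airspeed_kt=200))   # False: valid data, recovered

With that behaviour, easing off the stick would never re-trigger the warning and push the pilot back toward the wrong mental model.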
> Basic, rudimentary 'stick and rudder' flying skills were a big factor in AF447's crash. All old-school pilots know that when your aircraft is in a nose-high stall condition, you never keep pulling back on the stick; instead you push it forward to lower the nose and get the wings flying again.
Except on an Airbus. If the plane is in "normal law", it won't go into a stall condition. Here's the Airbus training video.[1] Note, by the way, that the automatic recovery includes going to full throttle. The throttle levers don't move, though. Unlike Boeing, where the levers are moved by the computers and the pilot can overpower that. In the 737 Max, though, it's worse, because the engines are mounted too high and full thrust pushes the nose down. So "full power and back off on the stick" will not work.
>The fact that the co-pilot in question kept holding the stick to the back stops was the main reason that the aircraft wallowed into the sea. Weirdly, he did let go of the stick for a brief few seconds, which was the only time during the harrowing descent that the aircraft started to behave normally, but then he pulled it back and held it back right up until impact.
This description isn't consistent with what's in the accident report. Where are you sourcing it from?
AF 447 wasn’t all that different from this situation. One of the co-pilots was trying to pitch the nose down to recover from the stall. The other was panicking and trying to pitch up. The plane averaged their inputs, without giving feedback via the stick that this was happening. It wasn’t until very late in the flight that they figured out what was happening, and then it was too late to recover.
Obviously there was some significant pilot error in this case, but a big contributor may have been that the pilot who was trying to correct the stall didn't understand that the plane was ignoring his input because of the averaging.
In April 2012 in The Daily Telegraph, British journalist Nick Ross published a comparison of Airbus and Boeing flight controls; unlike the control yoke used on Boeing flight decks, the Airbus side stick controls give little visual feedback and no sensory or tactile feedback to the second pilot.
Ross reasoned that this might in part explain why the pilot flying's fatal nose-up inputs were not countermanded by his two colleagues.
In a July 2012 CBS report, Sullenberger suggested the design of the Airbus cockpit might have been a factor in the accident. The flight controls are not mechanically linked between the two pilot seats, and Robert, the left-seat pilot who believed he had taken over control of the aircraft, was not aware that Bonin continued to hold the stick back, which overrode Robert's own control.
That suggests there was only ever one pilot flying, and the way that pilot reacted to the situation had a big part to play in the final crash.
> That suggests there was only ever one pilot flying
"Pilot flying" is a human-factors title, not a software function-lock. It just indicates who has control responsibility at that moment but it is not enforced by technical means.
It is intended to eliminate ambiguity in crew functions; the PF can be a newbie copilot even if the commander of the aircraft is a 30-year-service Captain who would become the PNF at that point. Its all part of Crew Resource Management theory.
There should only be one PF in a cockpit at any one time, precisely to avoid the situation that arose with the Air France flight where the computer was receiving inputs from two pilots.
I was responding to the claim that the flight control was averaging the two pilots' inputs, because if that was the case then two pilots would have been flying the plane.
My point was that I doubt this was in fact happening and there was only ever one pilot in charge.
> the Air France flight where the computer was receiving inputs from two pilots.
The link and quotes I posted suggest that was not happening.
The system was just ignoring the other pilot (and that was the design fault): it also failed to tell that other pilot he was being ignored.
Thanks for the link. It is a very interesting read.
In particular it also says this:
To avoid both signals being added by the system, a priority P/B is provided on each stick. By pressing this button, a pilot may cancel the inputs of the other pilot.
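That quoted behaviour is roughly the following; a simplified sketch in Python, not the real Airbus flight-control code, with thresholds and names invented:

    # Simplified sketch of the dual-input handling described in the quote above.
    def combined_pitch_input(left_stick, right_stick, left_priority, right_priority):
        """Stick values run from -1.0 (full forward) to +1.0 (full aft)."""
        if left_priority:
            return left_stick, None                    # other side's input is cancelled
        if right_priority:
            return right_stick, None
        if abs(left_stick) > 0.1 and abs(right_stick) > 0.1:
            # Both pilots deflecting their sticks: the signals are added (clipped)
            # and a "DUAL INPUT" aural warning is triggered.
            summed = max(-1.0, min(1.0, left_stick + right_stick))
            return summed, "DUAL INPUT"
        return left_stick + right_stick, None

    print(combined_pitch_input(0.8, -0.8, False, False))  # (0.0, 'DUAL INPUT'): inputs cancel out
    print(combined_pitch_input(0.8, -0.8, False, True))   # (-0.8, None): right stick has priority

So without the priority button being pressed and held, two opposite inputs simply cancel out, which is the scenario people in this thread are worried about.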
Yes, indeed, I have not found any reliable source for the claim that both pilots were making significant stick inputs simultaneously for any extended period of time.
You may be right about the averaging. From rereading the accident report, the Pilot Flying took back control of the plane after the Pilot Not Flying engaged his controls and tried to pitch down.
But, it’s the same basic idea. The PNF thought he’d gotten control of the plane, and didn’t understand why his input wasn’t having an effect. He didn’t get feedback from the stick telling him a different input was being honored. And neither pilot appears to have been fully aware that they were in a flight control mode where there was a risk of stalling. The PF especially never seemed to have made that connection, and the PNF took a fairly long time to call it out. As a result, the PF may not have been aware that he needed to actively keep the angle of attack inside the flight envelope.
So, PNF tries to pitch down, but isn’t aware the plane got put back into a mode where he isn’t in control. PF is pitching up, but isn’t aware the plane switched to a mode where this could lead to a stall. That’s the similarity I was getting at.
From the reported control traces, there was no prolonged period of dual input. There were 3 or so brief moments of dual control input (1 - 2 seconds), during which a warning was sounded. The pilots never spoke out loud about it, but we can infer that they heard the dual input warning and were aware when it happened because the sequence of events was the same each time; inputs from both joysticks received -> aural dual input warning -> input from one joystick stops.
Something about the idea of two pilots inadvertently fighting each other for control of the aircraft has definitely caught peoples’ imagination. But it didn’t happen.
The incorrect user input on AF447 happened AFTER all of the automatic systems had failed due to sensor clogging. How could ignoring user input have helped the flight when the plane's computer giving up was the cause of the manual takeover in the first place?
Yes, I guess that's where I say I don't know of a foolproof system, and then, yeah, how would it know it was incorrect input? I was simply saying incorrect input, given the actual situation, was an issue.
I'd say the improper input was the most direct factor, as it was responsible for the stall condition all the way into the ocean.
But there are multiple major factors leading up to that, including the lack of high altitude training in direct law, and that the simulator didn't exactly simulate high altitude stalls, and that the stall warning stopped when the angle of attack was beyond the sensor limit. All of these things are major and the final report really sank a lot of blame on Airbus and Air France as well as pilot startle effect basically stopping their brains from working the problem. The senior pilot who arrived didn't have that, and quickly figured out the source of the problem but by then it was too late, not enough altitude to recover.
After so many decades, is there no golden rule here? (Genuine question.)
A principle of falling back to zero automation in case of confusion? Something hardcoded deep in the design so that the people in charge (the pilot crew) can know for sure that, whatever happens, it's in their hands?
Of all the many problems of the Boeing 737Max situation, and there are several, for once I don't think the media reporting is one of the biggest. But, your basic point stands.
The root problem is the culture at Boeing and the FAA has shifted from safety first to profit first.
The investigative reporting from The Seattle Times[0] indicates that safety engineers were pressured to avoid delays to rush out a competitor to the A320. Furthermore, their safety analysis was based on flawed assumptions to meet an artificial constraint of not requiring pilot simulator training in order to appease the airlines they were selling to. Finally, the FAA is allowing industry to self-certify critical systems with lax oversight.
It is easy to get lost in the technical details of why a particular catastrophe happens. The common throughline is a broken culture where deviance is normalized and those who speak out are ignored. It's the same story with Chernobyl, Fukushima, the El Faro, the USS Fitzgerald and USS John S. McCain, Air France 447, and now the 737 Max.
Fukushima? That doesn't belong on the list. There is some limit to any engineering decision. Complaining about MCAS is totally reasonable, but it would be unreasonable to argue "The 737 MAX is not safe because if I hit it with enough Stinger missiles it won't fly anymore." Like, yeah? No kidding?
Fukushima was designed to survive the earthquake, and it did, it just wasn't designed to survive the earthquake and also the tsunami.
Fukushima survived the earthquake and even survived the tsunami. The generator got wiped out, but even that wasn't what ultimately led to the disaster. It was that the battery backup eventually ran out of power (not unexpected) and the connectors for recharging it were old and of a format that isn't used any more. There was no way of recharging the battery backup and so the pumps eventually failed.
It's one of those problems where there are literally a million things that could go wrong and since the emergency system is not used normally, it's easy to overlook a critical problem.
So I agree with you. Fukushima was not a design error -- or at least not a design error that could have been reasonably fixed at the time that the reactor was originally designed. It was an error in maintenance. Obviously better to have a design where loss of power doesn't cause a melt down, but I don't think that these were available when Fukushima was built. CANDU reactors existed at that time, but I think they were still considered experimental. Pickering came online in 1971, so basically at the same time as Fukushima. I'm not familiar with other passive designs, so possibly someone else can make an observation.
But basically, as far as I can tell, Fukushima was a reasonably normal nuclear power plant for the time it was designed. The 737 MAX seems to have suffered from problems because of design decisions that are not considered normal.
Totally agree. Done it myself more times than I care to admit. One small quibble, if I may. Originally "antipattern" used to mean something that looks like a good design pattern, but will actually bite you in the end if you used it as intended. This is not so much an anti-pattern as it is an unfortunate reality (you have to maintain compatibility with external interfaces for the length of the project). How much bit rot have I seen in my career?
I was thinking of set-it-and-forget-it backup systems as the antipattern, as opposed to e.g. designs that regularly force the "backup" system into active use under controlled circumstances. The battery connector represents sort of a backup of the backup though, so it may not be a good example of what I was thinking of.
But it could have been fixed quite easily, by simply siting the backup generators above ground. That was a stupid design error. Tsunamis are not unknown in Japan, after all.
I refrain from using “simply” or “just” unless I am the person expected to design or fix the problem. Ahead of time. Saying after a disaster caused by the most powerful earthquake ever recorded in Japan[0] that the solution was “simply” to do some coincidentally simple-sounding thing is not credible.
Yeah, this is something that I think doesn't really resonate with people well. The reactor site is 25 meters above sea level. I'm not exactly sure how high the generators were, but they were well above the level that experts thought was safe at the time. The earthquake was a 1 in 1000 year event and so there was no data on record to help them model the resultant tsunami. In the years following the tsunami, the way people modelled waves radically changed based on the new data.
There are a couple of caveats. First, there were markers saying that a historic tsunami had come in much higher than the models would have predicted. However, they are very old. It's just a rock stuck in the ground with some writing on it. Stuff like that is all over Japan (there are lots of markers around where I live; I don't think anybody pays any attention to them at all. Probably we should, but usually they just mark boring stuff ;-) ). It's like seeing a Roman road marker in Europe. Interesting, but not really noteworthy. It's only after the tsunami that people saw the markers and said, "Holy cow. There's a marker here showing that a tsunami came up this far." Even then it's a far cry from seeing that to saying that we need to invalidate all our wave models.
Secondly, I think there is some evidence that in the few years preceding the tsunami, researchers were getting worried that their wave models were not correct. I think it's even the case that nuclear plant companies were aware of this. When I first moved to Japan in 2007, there used to be a section of the Meteorological Agency of Japan's website that showed, among other things, a map of the farthest a tsunami would theoretically go for all parts of Japan. It also listed the maximum wave size for every single place along the coast. It noted places where sea walls were not high enough and estimated worst-case damage and numbers of casualties. Around 2009 it disappeared. I tried to find out where it went and the response I got was that it needed to be updated and that it would return at some point in the future. Of course, it never came back. At the same time, I've heard that literally a few years before the Tohoku earthquake there was serious debate about whether or not the wave models were correct. However, I think it's pretty clear that in 1967, when they started construction at Fukushima, they had absolutely no idea that they were building in a potentially unsafe area.
It really sucks, and I think it's fair to say that as humans we probably have too much hubris when it comes to our science. The fact that you have no reasonable way of knowing that you are making a mistake doesn't mitigate the problems that result from that mistake.
Agreed. I think the background you shared shows one of the big problems in design: it's always easy to see the tsunami markers after the fact and say "the information was there all along". It's a lot harder to pick them out from a thousand information sources that are, a priori, just as compelling. Looking for those markers before design would have required looking at every bit of evidence at least as compelling as those markers, which would likely be cost-prohibitive.
It's actually kind of interesting. After the disaster, the nuclear power plant near me moved its backup generators literally to the top of a neighbouring hill; it's about 100 meters above the entire complex. And they added another backup on another hill. Once you know the problem, it's not hard to fix it. I admit to being a bit worried that now the generators are too far away from the power plant, so that when an earthquake happens (we're 50 years overdue for our regular terrible earthquake) they will be disconnected. The irony will be lost if it happens, I'm sure...
> Fukushima was designed to survive the earthquake, and it did
untrue
It was designed to survive both a tsunami and an earthquake. Tsunamis often are caused by earthquakes.
That Fukushima survived the earthquake is a myth. The plants had an emergency shutdown, and there was very little time for a damage assessment, which would have taken weeks or months.
Whether the plant would ever have been restarted after the earthquake is unknown. It could have been a full loss, like several reactors in Japan, which will never be restarted.
The plant lost electrical connection to the grid, of course it had an emergency shutdown. Otherwise they'd have had to have found some other method of dissipating megawatts of electricity.
The fact that other plants have not been restarted is at least as likely to be political as it is technical.
> The plant lost electrical connection to the grid, of course it had an emergency shutdown. Otherwise they'd have had to have found some other method of dissipating megawatts of electricity.
A nuclear power plant always has an immediate shutdown in case of a strong earthquake:
'Japanese nuclear power plants are designed to withstand specified earthquake intensities evident in ground motion (Ss), in Gal units. The plants are fitted with seismic detectors. If these register ground motions of a set level (formerly 90% of S1, but at Fukushima only 135 Gal), systems will be activated to automatically bring the plant to an immediate safe shutdown.'
'Tepco's announcement yesterday included a section dedicated to the effects of the magnitude 6.8 Niigata-Chuetsu-Oki earthquake, which violently shook the Kashiwazaki Kariwa nuclear power plant on 16 July 2007. All seven of the reactors remained safe during the event, which caused huge damage to the region and several deaths. However, checks to establish the units' safety to return to service are proving very lengthy, and could continue into the latter part of 2008.
The ongoing inspections at Kashiwazaki Kariwa are to cost ¥122 billion ($1.13 billion) in FY2007. In addition, ¥25 billion ($233 million) will go on civil engineering repairs while a geological survey of the site is to cost a total of ¥2 billion ($18 million).'
Just the inspections after a safe shutdown for that nuclear power plant did cost more than 1 billion USD...
Though one might argue that the risk of tsunami is not independent of the risk of (certain kinds of) earthquake for pacific rim nations. Failure to take that into account might be considered a design decision.
> The root problem is the culture at Boeing and the FAA has shifted from safety first to profit first.
So the same problem that pervades society everywhere now? I’m not sure if that wasn’t the case before, but it feels to me that people previously wanted to make lots of money by building great products, and they’ve just left the ‘building great products’ part behind.
There are still companies that do that; the ones I'm aware of are mostly from Germany and Japan. Like assembly-line robots, but also Panasonic laptops (and maybe Fujitsu; I haven't tried them for a while, but I used to be a big fan of their 2-in-1 P1510 range), especially the Japan-only ones. They would not sell anywhere else because they are crazy priced, but they are virtually indestructible and go on forever.
Despite the 737 Max fiasco, airplanes today are far safer than ever before. Since 1970, annual deaths have been cut by >80%, while air traffic has increased by a factor of 10.
Cars have seen similar improvements. So have food hygiene, workplace safety, and most any measurable safety record I can think of.
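Taking just those two figures at face value, the improvement per unit of traffic is even starker; a back-of-the-envelope calculation:

    # Back-of-the-envelope using only the two figures in the comment above.
    deaths_ratio = 0.2    # annual deaths cut by >80%, so at most 20% of the 1970 level
    traffic_ratio = 10.0  # air traffic up by a factor of 10

    rate_vs_1970 = deaths_ratio / traffic_ratio   # 0.02
    print(f"fatality rate per unit of traffic: {rate_vs_1970:.0%} of the 1970 level")
    # prints "2% of the 1970 level", i.e. a roughly 98% reduction per unit of traffic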
You are using statistics wrongly here. The FAA's process changed, and Boeing entered panic mode so their process changed too; applying statistics from a past where the processes were different (the FAA did its job and Boeing was not cutting corners) is incorrect.
We need new statistics, and these are the stats for the recent Boeing plane: the MAX crashed twice, the first crash was blamed on the pilots and no serious, urgent investigation was performed, the MCAS issues that were discovered were trivialized, and Boeing is still trying to shift the blame onto bad software and not onto the actual causes.
My point is that you can't use statistics that way; there are rules on how to apply them correctly and many pitfalls when applying statistics to real-world scenarios.
No, I'm using statistics exactly as intended. Last year was much safer than any year in the 20th century, and so will this year be.
You have some ideas about how everything used to be better in the past, and you're trying to hold onto them in the face of overwhelming evidence to the contrary.
There isn't some sudden increase in the pressure to earn money that wasn't there in, say, 2008. And while the 737 Max process was obviously flawed, the argument above was that somehow there are fundamental problems across companies and industries, not just a single model. Quote from above:
> So the same problem that pervades society everywhere now? [I]t feels to me that people just left the ‘building great products’ part behind.
While evidence of a breakdown in Boeing's ability to design safe planes would indeed lag, there are many other critical processes where regressions would show rather quickly, such as maintenance, fuel quality, air traffic control, IT security, etc.
Maybe you want to state it more clearly, like in math, so I can have a better idea what you are talking about. I think I misunderstood you anyway, because I was thinking of airplane crash statistics and not of the entire world industry.
In statistics you need some basic things to be true: for example, you need truly random, independent events, or sometimes a theorem applies only if the distribution of the events is known to have certain properties.
What I understood you to be implying (I was wrong, sorry), and other comments too, is that if we look at Boeing crashes (not the MAX) for the latest N years (but exclude older incidents), then we can conclude things about the future statistics of the MAX.
About the rest of the industry, I agree that most things got better. But in the car industry, for example, there was a lot of competition and a lot of regulation about safety and pollution that forced the companies to be better; if you had only two car-making companies and had them self-approve, then we might get into the same situation as with the MAX.
The problem was that because it was an unknown component its failure mode was not known, which created/exacerbated the panic.
There is a simple procedure, already part of the standard memory checklists, for what to do in case of runaway trim. The pilots must (or at least should) be able to notice the trim wheels spinning; they can then disable the trim motors and fall back to manually cranking them.
The problem is, panic makes a mess of almost anybody. Sure, pilots shouldn't be anybody, but we know how much cost cutting has been going on.
Well, there's also the fact that the stabilizer can get trimmed far enough that the aerodynamic load on it exceeds the pilots' ability to move it manually. In that case the only real solution early in takeoff would be to re-enable the trim system so you could use the trim motors. The other solution is to nose down to release some of the pressure on the stabilizer so it can be manually cranked again, but this isn't really an option early in the flight because you don't have the altitude to slowly undo the erroneous trim while diving.
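A rough decision-tree version of what the last two comments describe, sketched in Python; this is NOT an operational procedure, and the altitude threshold is invented:

    # Sketch of the runaway-trim handling described above, not an operational procedure.
    def runaway_stabilizer_response(trim_running_uncommanded, manual_crank_possible, altitude_ft):
        if not trim_running_uncommanded:
            return "no action"
        steps = ["hold the control column firmly",
                 "set both STAB TRIM CUTOUT switches to CUTOUT"]
        if manual_crank_possible:
            steps.append("trim manually with the wheels")
        elif altitude_ft > 10_000:
            # Aerodynamic load too high to crank: unload the stabilizer (relax back
            # pressure / nose down), then crank, which needs altitude to spare.
            steps.append("unload the stabilizer, then trim manually")
        else:
            # Low altitude with an untrimmable stabilizer: per the comment above, the
            # remaining option is re-enabling electric trim to drive the mistrim back out.
            steps.append("re-enable electric trim to recover the mistrim")
        return "; ".join(steps)

    print(runaway_stabilizer_response(True, manual_crank_possible=False, altitude_ft=2_000))

The last branch is the trap: the checklist assumes manual trim will work, and early in the climb neither workaround is really available.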
I wasn't aware of that, and it seemed quite ridiculous that there was not enough torque to rotate the screw, but looking at it, the angle of the helical threading is pretty steep.
And it turns out this can and does happen even with the electric motors, and there is a maneuver to work around the load. But it got removed from the manual...
When the Boeing CEO had his press conference in April, he said "we've confirmed that the MCAS system as originally designed did meet our design and safety analysis criteria and our certification criteria."[1] I yelled at the screen "you know that means the criteria are flawed, right? If the procedures didn't catch this mistake, that means there are other mistakes that weren't caught!"
A flabbergasted reporter then asks him if he means to say that MCAS was designed to push the nose down 21 times. The CEO then blamed the pilots for not following procedures!
He repeatedly avoided admitting that any mistakes were made. He was so intent on avoiding blame that he gave me no confidence that Boeing could learn from its mistakes. The longer he talked, the less confidence I had in Boeing as a company.
Edit: I'm unsure if this video is correct, but the MCAS system is a classic example of the kind of "solution" a junior programmer would produce when patching software: adding more complexity to the code without realizing what those changes would further affect. It feels like they gave up on a solid solution and just decided to add a sensor that would override the pilot's maneuver.
A similar situation would be downgrading the size of the tires on a new model of a car (to be competitive with other automakers that lower the price) and simply adding a software block so you cannot turn the wheels as sharply as you could before, because on the thinner tread that would lead to the car tipping over.
The whole video, if correct, paints a grim picture of profits put above PAXes...
Admitting culpability more or less admits financial responsibility as well.
If Boeing would say "we're at fault", people would sue them (more than now) and have greater chance at winning those lawsuits, since Boeing already admitted to being at fault.
I can imagine Boeing, and the Boeing CEO, trying very, very hard to walk the fine line of not admitting fault while also saying that they are "taking responsibility"
(since they have 300 souls, public opinion, and lawmakers against them). They have to convince the people who matter, i.e. those able to stop operations and sales, that the MAX and Boeing are safe.
Can you imagine killing 300+ people and then having to talk in such a way that, down the line, the least amount of money will go to the families of the victims? This man really has no soul.
I wonder if they're just now finding new things by simply running the software in a simulator, what more will they find, and what does this say to their continued assurances as to the safety of this plane?
I think there are several factors at play. First of all, the 737 MAX is getting scrutinized extremely thoroughly now. Whatever went wrong with the original certification, authorities are looking with extra attention so they are not missing anything this time.
Then I think, that as a consequence of the analysis of the two crashes, in general a better understanding of certain flight states has been reached. In the light of this better understanding, other bugs might be discovered which previously have been missed.
On the flip side, all airplanes have flaws to some degree, or else they would never crash. A deep audit of everything should, in the end, make things safer. And Boeing also has a solid safety record on their other models, the 777 and 767.
Boeing also has a solid safety record on their other models, the 777 and 767, and an incredibly poor safety record for the 737 MAX.
The problem for Boeing is the history of flying has many examples of planes with poor safety records being rejected by the flying public.
Only time will tell if the 737 MAX joins that illustrious group, but the more the plane stays in the news for all the wrong reasons, the better its chances.
I mean. I am going by wikipedia here but most of the things you've said in this subthread seem to be inaccurate. It wasn't the original 737. It wasn't the NG. And most of the accidents you've listed were not caused by that particular rudder design problem (as far as I can tell, 3 were, maybe?). You keep adding new inaccurate things without the slightest acknowledgement you might have misremembered or misstated something before.
The five minutes seem to have been inadequate because a bunch of these explicitly say they were not conclusively attributed to that specific problem.
Copa Airlines Flight 201
"but after an exhaustive and extensive inquiry, they concluded that the crash was caused by faulty attitude indicator instrument readings."
China Southern Airlines Flight 3943
"CAAC blame the pilots for improper response to an autothrottle malfunction. "
Silkair Flight 185
"The NTSB's report found that there was sufficient evidence to rule out mechanical failure (based on examinations of the suspected PCU/dual-servo unit recovered from the SilkAir crash), and that the probable cause of the accident was "intentional pilot action" by a pilot"
It's a shame so many people have to die for them to "sort their shit out". Maybe they shouldn't make idiotic design decisions and lie to regulators in the first place?
Flying is not exactly easy, and imperfect beings can't build perfect machines; even if they could, they wouldn't be able to test them properly, because they are imperfect. In 1000 years there will still be aircraft accidents.
I would say times have changed at Boeing, because it seems for the MAX, rather than sorting their shit out they instead seem to prefer covering up their design faults.
They also seem to have a bad habit of blaming the dead pilots for not being able to fly their broken planes.
The flaw was allowing a single pilot in the cockpit and allowing the co-pilot to get locked out. But yes, I do agree that in theory it's possible to have a perfect aircraft and still crash due to some act of God. Nevertheless, the context was with respect to making an aircraft with known flaws safer.
You could hardly describe that as a flaw when the alternative was 9/11. It's a trade-off on the balance of probabilities. A suicidal copilot can still, to this day, crash any aircraft with only slightly more creativity; for example, hitting the fuel cutoffs at 500 ft after takeoff.
The alternative isn't a repeat of 9/11. You're assuming that passengers aren't a whole lot more motivated to overpower hijackers after 9/11 than before. One reason 9/11 could happen is that passengers generally assumed a hijacking was mostly a scary inconvenience, not a life-or-death situation.
Exactly; another 9/11 situation is impossible now. They just need to keep doing screenings of passengers and their carry-ons to make sure no one can bring obvious weapons on board; anything else is unnecessary and possibly counterproductive. If someone tries to hijack the plane using razor blades or whatever, the passengers will literally murder them.
It still doesn't hurt to keep them out of the cockpit, though. Quickly stabbing the pilot to death is likely to at least kill or injure a plane's worth of passengers.
True, but what if the pilot is having an emergency, or the copilot has decided to commit suicide while the pilot went to the bathroom? There should be a way of getting the door open quickly if enough people authorize it (perhaps at least 2 crewmembers).
This plane appears to be a (sorta) flying dumpster fire rushed to market against Airbus with tragic consequences. One design flaw after another is MAX failure.
The industry is pretty homogeneous and somewhat of a revolving door between the few OEMs. What are the odds that the code on this plane is orders of magnitude worse (in terms of code quality) than every other similar bit of code in the sky?
Edit: I shouldn't have to say this, but I'm not defending Boeing here. I'm saying it's not at all unforeseeable that other planes, even ones by other manufacturers, are just as bad. Clearly they thought nothing was abnormal about this one until it started falling out of the sky. I see this kind of like dieselgate: if one of them is cheating, they're probably all cheating.
It's worse than other such planes. For instance, the accident on Qantas Flight 72 is superficially similar - faulty data caused the Airbus A330's flight computer to ignore pilot input and abruptly pitch down, injuring a number of passengers - but it really wasn't. The Airbus engineers knew that angle-of-attack sensors fail, so unlike Boeing they designed their system to compare the input from multiple AOA sensors and ignore bogus data.
Unfortunately, that comparison algorithm had a flaw - it got confused when it received spikes of invalid data with a certain timing pattern and erroneously used the invalid data. That pattern shouldn't have been possible. No-one has been able to figure out any possible cause for it even in retrospect, and they certainly didn't anticipate it. However, the engineers designing the system did realise that the flight computers could have subtle bugs triggered by specific data timing - so not only did every flight computer have a monitoring channel running independently-written code checking its calculations, that monitoring channel was intentionally not synchronized with the main channel or any other flight computers. This meant every time one of the flight computers acted on bogus data and forcibly pitched down, the monitoring channel calculated values so different that the fault detection disabled its ability to do so within a few seconds.
The maximum allowed authority and the altitude at which the system was enabled were also much more carefully restricted than MCAS's, so it couldn't take such erroneous actions in situations where the pilot might be unable to recover. Combine the two safeguards, and something like Qantas Flight 72, with a few passenger injuries but no crash, was pretty close to the worst-case scenario that could be caused by this weird and incredibly unlucky issue.
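To give a rough feel for the technique being described (this is only a sketch of sensor voting plus an independent monitor, not Airbus's actual algorithm; the thresholds are invented):

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <optional>

    // Sketch: cross-check redundant AoA sensors before acting on them.
    // Thresholds are made up; real systems are far more elaborate.
    std::optional<double> votedAoA(std::array<double, 3> aoa) {
        std::sort(aoa.begin(), aoa.end());
        const double maxSpread = 5.0;                     // degrees, hypothetical
        if (std::fabs(aoa[2] - aoa[0]) > maxSpread)
            return std::nullopt;                          // sensors disagree: don't act
        return aoa[1];                                    // median discards one outlier
    }

    // An independently written monitoring channel recomputes its own estimate
    // and inhibits the command path whenever the two channels diverge.
    bool monitorAllowsCommand(double commandAoA, double monitorAoA) {
        return std::fabs(commandAoA - monitorAoA) < 3.0;  // hypothetical limit
    }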
Huh, this is new to me. I thought it was simply that two sensors failed, and their failed values outvoted the correct value in the quorum vote with the third sensor. What's the mystery about?
A longer landing gear to get the plane higher off the ground doesn't fit into the existing gear well in the wings. So you would have to redesign the wings. This required making sure that the loading on the main body stays the same. By now you are doing more than 50% the design work of a new airplane and you definitely can not keep the type rating. And if you need a new type rating anyway, you would redesign the cockpit, probably to be similar to the glass cockpits in the 777s.
They could, but my understanding is then it would count as a new airplane, with more training required for pilots to fly it. They were trying to pretend it was not a completely new airplane, so that pilots wouldn't need much new training to be certified to fly it.
It is turning out to have been a pretty bad decision, even from a mercenary point of view.
The engines on the original 737 design were quite small by modern standards, and the wings were low as well. As such there aren't really a ton of places to put new, larger engines. They basically have to be placed further forward and higher up. This means the pitch moment created by throttling up/down is different on different versions of the 737, which is what required the MCAS device in the first place - they were trying to make sure the 737 MAX flew the same as previous models.
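As a back-of-the-envelope illustration of that thrust/pitch coupling (a crude point-thrust model with invented numbers, just to show why a different engine position means a different pitch response):

    #include <cstdio>

    // Crude model: the pitching moment from thrust is roughly
    // thrust * offset of the thrust line from the centre of gravity.
    // All numbers are invented and only illustrate the idea.
    int main() {
        const double thrustN    = 120000.0;  // per engine, hypothetical
        const double oldOffsetM = 0.5;       // lever arm on the older 737 (made up)
        const double newOffsetM = 0.8;       // lever arm with engines moved forward/up (made up)
        std::printf("old thrust moment: %.0f N*m\n", thrustN * oldOffsetM);
        std::printf("new thrust moment: %.0f N*m\n", thrustN * newOffsetM);
        // A different lever arm means the same throttle change produces a
        // different pitching moment - the handling difference described above.
        return 0;
    }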
Ah, you mean why did they originally do it? Sorry, I misunderstood. The original reason was that the 737 was designed that way so that it could be boarded with just stairs instead of a skybridge: https://simpleflying.com/737s-low-to-the-ground/
This was super helpful to small carriers at the time, but as time has gone on it's caused problems.
Because they would have to redesign the parts of the fuselage that holds the landing gear when they are retracted - as far as I know, there's not enough room to house taller landing gear.
Boeing did actually design extendable landing gear for the 737 MAX-10 model, which hadn't shipped yet. That aircraft is longer. To avoid tail strikes, the rear gear becomes longer as the aircraft is taking off.
I'm sure that costs more money. If it were on all 737 MAX aircraft, perhaps the engines could be lower. It'd be like a kneeling bus that gets lower when stopped to take on passengers.
Another option would be to make the engines move. They could be folded up or slid up-forward when driving around the airport. The FAA might have issues with this though. I don't know of any aircraft that does transformer stuff while zooming down the runway.
True, there are planes that compress the struts before retracting the gear for the gear to fit, so perhaps something like this could be designed for the 737 too? But that's increased complexity on a non-redundant part that you definitely don't want to fail, so it's not obvious how the trade comes out...
You're talking about changes that would cause the plane to be a very different plane than the 1967 737, and would then need a new type rating. If you're going to do that, you might as well just throw out the 737 design altogether and make an all-new clean-sheet design. Boeing wanted to avoid that because that costs a lot more money and pilots would need retraining on the new type. They wanted to be able to call this piece of junk a "737" and claim that pilots didn't need any new training to fly it.
They could, but it would mean that a lot of the supporting infrastructure at airports would have to change, essentially negating much of the advantage from avoiding a new clean-sheet design.
As long as it remains sufficiently 737-shaped, airlines can keep flying it to all the places they've been flying 737s for decades.
What supporting infrastructure would have to change? Jetways are compatible across a wide variety of planes. Yes, you can't use the same jetway on an A380 as a 737 (the A380 is just way too big), but you certainly can use the same jetway on a 737 as on an A320, and many other planes besides.
The reason they didn't want to do a clean sheet redesign of the 737 was to avoid pilot recertification, not because of ground infrastructure. Airports are already used to servicing dozens of different types of planes. Adding one more to the mix wouldn't materially change anything.
You sure? The A380 is so much larger (especially in width) that it can't fit into the narrower gate spacing typically seen for smaller planes. Sure, maybe one in isolation could service an A380, but ten in a row in typical airport spacing can't service ten A380s.
Separately, there's the issue that the A380 is so large that you need multiple jetways (at least one for each level) to efficiently load and unload everyone. Maybe you could slowly unload an A380 by debarking everyone off the lower level and forcing everyone on the upper level to walk down the plane's internal staircase, but this is extremely sub-optimal. In practice airlines don't do this; they use specialized gates set up for the A380's special needs.
That doesn't mean it can't use the same _jetway_ (we were discussing jetways, not gate spacing) you'd use on a 737 or that a 737 can't park at a stand wide enough to fit an A380.
The constraint is that the FAA would see it as a new aircraft and therefore have to certify a new airframe. The very in-depth review the FAA is currently doing on the MAX is exactly the expense (in time and money) Boeing sought to avoid.
My instinct is that you're probably correct, and most planes are probably riddled with similar problems and would look bad if they came under the same scrutiny as the 737 Max is currently under. For example, I remember after the 787 was launched there were a few stories in the press about the batteries catching fire mid-flight, but it turned out the designers and engineers had accounted for that possibility and it wasn't really a big deal. They've improved the batteries, but it didn't actually hurt anybody.
However, the problem with the max seems to be that instead of designing for failure - assuming that critical systems might fail and the plane should recover gracefully from the failure of most internal systems - they seem to have designed for success, and made the assumption that no part of the plane will ever fail. And that's obviously not realistic, and we're seeing the consequences of it here.
> I remember after the 787 was launched there were a few stories in the press about the batteries catching fire mid-flight, but it turned out the designers and engineers had accounted for that possibility and it wasn't really a big deal.
It was a huge deal, and the plane was grounded for three months due to those battery fires.
Originally the engineers did not account for battery fires and had to retrofit a containment compartment to seal the batteries in case a fire occurs.
Luckily, in each case the plane was either on approach or able to make an emergency landing when the problem occurred.
A burning lithium battery in a flying plane is one of the worst imaginable scenarios you can encounter during a flight and is a fucking big deal.
> My instinct is that you're probably correct, and most planes are probably riddled with similar problems and would look bad if they came under the same scrutiny as the 737 Max is currently under.
0.5% of all 737 MAXes which exist have killed everyone on board. No other mass market production airliner in recent history has that sort of record, AFAIK.
The rate of fatal crashes is more than an order of magnitude higher than anything introduced in the past 30 years.
The Concorde is the only thing that's worse. The A310 is closer than anything else, at 1.35 fatal crashes per million miles vs the 737 MAX's 3.08, and that one was also designed to share a common type rating with an existing design.
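For a rough sanity check of the 0.5% figure quoted above (assuming roughly 387 airframes delivered before the grounding, the commonly reported count, and the two fatal crashes; both numbers are my assumptions here):

    #include <cstdio>

    // Rough sanity check of the "0.5% of the fleet" claim.
    // Assumes ~387 delivered MAX airframes and 2 fatal crashes.
    int main() {
        const double delivered    = 387.0;
        const double fatalCrashes = 2.0;
        std::printf("share of fleet lost: %.2f%%\n", 100.0 * fatalCrashes / delivered);
        // ~0.52%, consistent with the figure quoted above.
        return 0;
    }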
> My instinct is that you're probably correct, and most planes are probably riddled with similar problems and would look bad if they came under the same scrutiny as the 737 Max is currently under.
The 737 MAX only looks bad because its track record so far is that it's a death trap.
Since this thread is already full of armchair airplane engineers' opinions about the root cause, I'll add mine too: never ever write new software under the condition that it must work EXACTLY like old software. It can't be done and you'll always miss something.
Case in point: Wine. They've been at it for years and the emulation layer is still far from perfect. It works 99% of the time which is good enough for games, but I wager not for airplane control systems.
Wine is not a good example. They don't have access to the source code of the original system, and they haven't had the resources to cover the entire footprint of the Windows APIs.
Also, if you rebuild a system in isolation, you have to reproduce all the bugs - since you do not know which of the bugs are load-bearing, i.e. other systems have been engineered to depend on the bugs, whether by accident or intentionally.
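A contrived illustration of what a "load-bearing bug" looks like (not from any real system; both functions are invented):

    #include <cassert>
    #include <cmath>

    // The legacy routine clamps negative inputs to zero (arguably a bug), and
    // downstream code has quietly come to depend on that. A clean-room rewrite
    // that "fixes" it now diverges.
    double legacyScale(double x)  { return x < 0 ? 0.0 : std::sqrt(x); }  // quirk preserved
    double rewriteScale(double x) { return std::sqrt(x); }                // NaN for x < 0

    int main() {
        assert(legacyScale(-1.0) == 0.0);        // what the old system's callers expect
        assert(std::isnan(rewriteScale(-1.0)));  // what the faithful-looking rewrite does
        // Whether that difference is a fix or a regression depends entirely on
        // what the rest of the system silently assumes.
        return 0;
    }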
In fact, in GNU Octave divergence from MATLAB is considered a bug. Admittedly, the entire point there is a free and open source replication of MATLAB, and it's only really necessitated by licensing.
No, the Octave FAQ reads: "There are still a number of differences between Octave and Matlab, however in general differences between the two are considered as bugs. Octave might consider that the bug is in Matlab and do nothing about it, but generally functionality is almost identical." Octave doesn't (for good reason!) re-implement bugs found in Matlab, meaning it doesn't and never will work exactly like Matlab.
Boeing's engineers weren't tasked with making MCAS generally work like a 737. They were tasked with making MCAS work exactly like a 30+ year old aircraft model, complete with quirks and faults. An impossible task.
Alright, I think I see the point you're going for now. I think I misinterpreted what you were saying before.
You're saying they set out to make not-a-737 behave physically like a 737. Yes. I concur. I seem to have hyperfocused on the replication part and not enough on the plane part.
I made a post about three months ago (https://news.ycombinator.com/item?id=19578201) that my main fear with the 737 Max was that they rushed the plane into production and more mistakes were made than with just the MCAS system. Unfortunately, that seems to be the case. I doubt Boeing will ever convince me that enough "patches" have been made to make the plane safe to fly on.
Why bother, given how easy it is just to choose another itinerary with a different type of plane? It's not like there's only one itinerary/plane type per route or anything.
I'd be pretty comfortable flying it after all this attention and review. It will probably be the best reviewed passenger plane software developed in America, if not the world once this is over.
Boeing deserves a 9-figure fine though, and its shareholders should lose massively to make sure this doesn't happen again.
I'm not convinced. The pressure on Boeing to fix this ASAP is immense. That is not a good environment for writing safety critical software. Especially if they are doing a "broader software redesign". I don't believe that software quality can be enforced from the outside.
Interesting tidbit in the video. At 1:43 you see a MAX in Jet Airways livery - an airline that ceased operations and terminated all flights about 1 month after the grounding began.
For anyone writing software controlling machines, it is pretty much the status quo. It has to be darn near perfect; updating it later, if that is even possible, will be expensive and inconvenient.
As much as it is a shitty environment, if you have 6 months to fix it and all of the company resources you can think of to ask for, that is lots of time.
> It will probably be the best reviewed passenger plane software developed in America, if not the world once this is over.
The problem is that this is not actually a software problem. It’s an airplane design problem, and Boeing is trying to convince you that it’s just the software.
Even if the software is perfect, this plane remains a flying coffin until it is redesigned from scratch.
It's a culture problem. You need to fix the culture to fix the root causes of all of this. And listening to the CEO (who is the culture), it doesn't seem they want to fix it.
When doing root cause analysis there is a pyramid with people problems at the top, then deeper technical problems, process problems, culture problems and value problems.
Most root cause analysis stops with people problems, or technical problems while all the root cause analysis I've done never showed that problems end there. Culture and value have often been the underlying causes.
Yes, everyone focuses on the software here. They assume that MCAS just needs a few updates and it will be all good.
How can we trust that assessment? What if the plane is inherently unsafe? There's been no critical 3rd party review of the plane without MCAS in operation. Everything is a Boeing talking point. Their proposed fix is 2 AoA sensors (on top of whatever slapped-together software updates), and if they disagree, disable MCAS. That's going to decrease the MTBF of that system. So, IMO, the real question is, why should MCAS even be allowed if it's so easily disabled? Either the planes can fly without it or they can't.
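For reference, the reported fix behaves roughly like this (a sketch based on press descriptions; the 5.5-degree disagreement figure has been reported, but treat everything here as illustrative rather than as Boeing's actual code):

    #include <cmath>
    #include <optional>

    // Sketch of the reported "two sensors, inhibit on disagreement" behaviour.
    // Not real flight software; the threshold is from press reports only.
    std::optional<double> mcasAoAInput(double leftAoA, double rightAoA) {
        const double disagreeThreshold = 5.5;  // degrees
        if (std::fabs(leftAoA - rightAoA) > disagreeThreshold)
            return std::nullopt;               // sensors disagree: MCAS stays inhibited
        return (leftAoA + rightAoA) / 2.0;     // otherwise use the average of the two
    }
    // Note the trade-off the comment above points at: any single sensor fault
    // now removes the function entirely, so its availability goes down.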
I bet they will be able to test the known or anticipated issues. What about the unknowns that bad hardware design introduced? That's what people are scared of.
The aerodynamic/stability effects of moving the engine position is a complicated topic, and I haven't seen a really good thorough explanation of it yet that really goes into the gritty details. There's so many rules of thumb and generalizations in the vast majority of peoples' understanding of aircraft stability and forces, even for pilots or engineers in the field.
What's the problem with the airframe? What's the problem with the larger engine nacelle causing a different pitching moment? I've yet to read that either of those violates any portion of FAR 25. There is still an open question whether the MAX should have or will have a separate type certificate from other 737s, thereby requiring pilots to hold a separate type rating to fly the MAX, thereby requiring full disclosure and training on all differences.
Which pilots should have had anyway, even if it didn't require a new type rating.
The problem with the airframe is that the shape of the nacelles, and their forward position create extra lifting surface in front of the center of gravity at high AoA.
What this results in is a non-compliant stick force response curve at high AoA. This violates FAA regulations, which state that the stick force required to bring a plane to a stall must consistently increase all the way to the stall. No slackening or decrease in apparent required stick force is certifiable as airworthy for civil transport aircraft.
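Restated as a check, that requirement amounts to something like this (my own paraphrase in code, not regulatory text):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // (angle of attack, required stick force) samples, ordered by increasing AoA,
    // taken from the approach to the stall. The force must never drop off.
    bool stickForceMonotonic(const std::vector<std::pair<double, double>>& curve) {
        for (std::size_t i = 1; i < curve.size(); ++i)
            if (curve[i].second < curve[i - 1].second)
                return false;  // any local lightening of the pull is disqualifying
        return true;
    }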
This was explained in a recent Seattle Times article. I also posted it here as a theory back in March or thereabouts.
[3] Relevant: FAR 25.173; see the stick force requirements.
MCAS was designed to artificially induce a mistrim which has the end result of smoothing out that force curve. Initially it was only for the high-speed, high-G case, but it was eventually loosened to accommodate the same behavior happening in a low-speed, low-G part of the flight envelope.
This is a hack in the truest sense of the word, because you'll notice the regs require the plane to be within a certain range of appropriate trim; except in those flight regimes, the trim is artificially massaged from what the pilot set it to, to what MCAS thinks it should be, which is determined by the input of only a single AoA sensor.
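As a caricature of that behavior (emphatically not the real MCAS implementation; every number below is invented):

    // Above some AoA threshold, the system quietly commands nose-down stabilizer
    // trim on top of whatever the pilot set, driven by a single AoA input.
    // Purely illustrative; all values are invented.
    double augmentedTrim(double pilotTrimUnits, double singleSensorAoADeg) {
        const double activationAoADeg  = 12.0;  // hypothetical trigger angle
        const double noseDownTrimUnits = 2.5;   // hypothetical increment per activation
        if (singleSensorAoADeg <= activationAoADeg)
            return pilotTrimUnits;                  // leave the pilot's trim alone
        return pilotTrimUnits - noseDownTrimUnits;  // a mistrim the pilot never asked for
    }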
In short, the airframe is aerodynamically flawed in some of the most fundamental ways known to the aviation community. It can work; given a correct MCAS system. There was no excuse, however, to hide this from both regulators and pilots.
I appreciate the response, but it does not help one bit; it's a rehash of everything I've already read. In all of these 737 MAX discussions I'm one of the first to have brought up FAR 25.173(c) (although that whole section is relevant).
I do not see how software augmentation can be permitted to compensate for a deviation from FAR 25.173. That section is titled: Static longitudinal stability. Software augmentation cannot cause an airplane that lacks positive static stability to have positive static stability.
You have to understand that what you're proposing represents such a magnificent perversion of FARs, if true, that it is not conceivable to me that this is not a big f'n deal in the aviation community. It would be the elephant in the room, that somehow Boeing and the FAA permitted an airplane that in fact does not have certifiable positive static stability in one or more axes to have been certified because of a software routine; let alone a software routine that has no redundancy and is easily disabled in flight with no documentation of the consequences.
Let me put another fine point on it: I do not care if there are 50 AoA sensors and 100 independent computers on this airplane to compensate for a FAR 25.173 deficiency (or pick any of the other axes, for that matter). You cannot use a goddamn computer to make an airplane statically stable when aerodynamically it is not statically stable. At least not without changing the FARs. As they are written, transport category airplanes must comply with FAR 25, and I see nothing in FAR 25 that permits computers papering over aerodynamic requirements.
>I do not see how software augmentation can be permitted to compensate for a deviation from FAR 25.173. That section is titled: Static longitudinal stability. Software augmentation cannot cause an airplane that lacks positive static stability to have positive static stability.
I absolutely 100% agree. At no point have I intended to imply that any software solution should justify exceptional certification of an unairworthy frame as airworthy; in case that isn't clear.
I've only tried to communicate the fact that the software can physically remedy the behavior enough to apparently bring it within the capability of a human to control (as has been demonstrated by the non-crashed flights, as Boeing will inevitably argue).
I absolutely, unequivocally reject any argument that the software fix is compliant with regulations as written, or should be accepted as an acceptable remedy in its current form.
I've had my view somewhat shaped by D.P. Davies in regard to the application of "gadgets" for certification. I think that any such system must be essentially bulletproof, and avoided at all costs by engineering the problem out if possible; and if allowed must unquestionably get the point across to the pilot that they are in dangerous waters.
The 727 had its stick pusher in the U.K. This seems to be a modern retreading of the same path, and to be honest, I'm siding more toward the conservative side you hold of not allowing MCAS to remedy these handling faults. It really would be an example of the slippery slope of normalization of deviance in action.
I really hope all pilots, aerodynamicists, and engineers take notice, and speak up. I can sit in these threads and try to spread understanding to the nine winds; but I'm some dude on the Net, who has no professional affiliation to any of this, just a big fat keg of outrage, because this is not Quality, damnit.
The #1 problem with the airframe is that it is not identical to the original 737 airframe. Therefore it shouldn't have been certified as a 737, and it should have required pilot training.
Boeing has been using software to emulate a 737 airframe to avoid the pilot training costs. This approach itself is fraught with problems, and I hope the FAA puts a hard stop to these non-solutions.
The 757 and 767 do not have identical airframes, they share a type certificate. The Airbus 320 and 340 do not have identical airframes, they share a type certificate. I think you do not understand type certification.
The 737 MAX airframe is substantially more like a 737 NG airframe, than a 757 is to a 767. Etc. It sounds like you have a problem with how airplanes are type certified, not a problem with the airframe.
Rather, I have a problem with airplanes falling out of the sky because pilots don't understand what they're flying, and airplane manufacturers trying to obfuscate their machinery. The details of how and why are secondary to that.
I think literally everyone has a problem with airplanes falling out of the sky. Anyone saying otherwise is perhaps a comedian. So I still have to insist you work harder to refine your complaint. The complaint is legitimate, but it's also ordinary. To contribute to the conversation you have to bring something a little bit more than just the ordinary.
Indeed, it might very well be questionable that the 757 and 767 have the same type rating. But then, even though they have the same type rating they do still have difference training. Is iPad difference training really adequate? Who evaluated this? Were there contrary opinions?
I'd like to see the European and Chinese aviation authorities put a hard stop to these non-solutions if the FAA doesn't. At this point, I don't think the FAA is at all trustworthy; they've proven themselves untrustworthy with the way they handled the certification of this turkey and how they were literally the last aviation authority in the world to ground the plane.
As a side note, it’s really unfortunate that we won’t get an NTSB-level (both in detail and widespread respect for the organization) report on what went wrong and how it happened. I’m not sure the Ethiopian and Indonesian authorities will really (be able to) dig into MCAS and all the details of its development as deeply as we’d like. I expect that some of the reporters covering this story will write books (Dominic Gates and Jon Ostrower in particular), and those will be the most authoritative accounts.
The NTSB will issue a report. It's an American made airplane involved in an accident, so they are required to investigate. They just don't have primary jurisdiction over the investigation, per ICAO rules.
This all points up the glaring conflicts of interest in the corporate business environment. Most of us think that Boeing is in the business of building airplanes, but as a publicly traded company, its real business is making money for its shareholders, and aerospace is just a vehicle toward that end. The Boeing CEO's steadfast refusal to take responsibility for these two crashes is nothing more than CYA designed to limit or defer financial responsibility for >300 deaths and to protect and privilege shareholder value above the safety and well-being of the flying public.
Actually, I think it's worse than that. If it were all about making money for shareholders, they would have made sure this PR nightmare (which will likely result in many lost sales) didn't happen. The long term monetary incentive is to avoid crashes.
The more fundamental problem is that corporations are run by people who, mostly, care more about themselves than the company they are running, and more about this year than the future. Boeing needed to suck it up and accept a couple years' poor sales, in order to get an effective competitor to Airbus' latest. In the long run, or even the medium run, this would give the best monetary results, as well as save lives.
But, it would mean bad financial results this year and next year, and the corporate business environment is biased against that. If the CEO makes decisions which give worse results this year and next, but better results later, he will not be around in his job to enjoy those better results. So, he optimizes for the near term, and rolls the dice (with other people's lives).
The biggest exception to this would be companies where the CEO is the founder, who usually has the ability to retain his job through a bad year or two.
Of course, the CEO should have made the right decision anyway, and accepted the consequences, because people's lives are at stake and anyway it's not like he would starve if he got fired next year. But, if we're talking about incentives and conflicts of interest, that's the real issue, not money vs. lives (because both point in the same direction, of making the plane safe), but short term vs. long term and CEO vs. the company as a whole.
The car industry is not as advanced as the aviation industry. A time will come when the level of complexity of a car means that building and testing it properly is so demanding that taking shortcuts becomes an ever-easier business decision.
And we have already seen glimmers of such behavior with all the emissions scandals around diesel cars... the outcome is not immediate death of passengers, but car companies were/are skirting health regulations to increase profits.
The world is full of examples of companies maximizing profits while screwing over their customers. Is that what was happening here? What's the alternative answer? They just didn't think it through? They rushed because of FOMO?
You're right. The real motivation is always making money for the executives and board-members. Bonuses are paid for revenue, not for not killing folks.
>You're right. The real motivation is always making money for the executives and board-members.
What sort of mindset must one have to assume that once a person becomes a "manager" or "executive" they are suddenly immoral or evil? Such an odd worldview.
I guarantee no one at Boeing made a conscious effort to kill hundreds of people for a bonus. Mistakes happen. People of all standings are corrupt. We fix the issues and move on, not cry about executives, board members and shareholders (which most people on this board are striving to become).
In my opinion, one of the important qualities of an organized company is to spread and dilute responsibility. No individual manager or executive needs to be immoral or evil, they need only be short-sighted or a little greedy or willing to keep their head down and toe the company line.
Each person is only responsible for their own bad decisions. It's likely there is no one person who signed the memo that said "casualties totally okay, make more money!" Instead we'll find several people signing off on several bad decisions, each person myopic enough to be able to honestly claim they had no idea something this bad could happen.
It would be nice to see the buck stop at, say, the CEO, but I haven't seen that happen. In many cases the CEO will claim that they gave blanket performance goals and that the execution of those goals was too technical for them to actually understand, or that those technical details were hidden from them by people below them who were trying to hide things (rather than only deliver the type of information the CEO really wanted).
I don't need an armchair assessment of my worldview, thanks.
It's well known that large US companies offer perverse incentives for their executives. As long as they don't do anything super illegal, there's often no recourse. I mean, recent history is replete with examples of executives optimizing-for-bonus.
I have had the opportunity to observe a very wide variety of businesses in several industries and have noticed the same thing as multiple studies - a high degree of sociopathy among execs. My conjecture is that this is both because aggressive sociopathy will help people rise among the ranks, and also because the structure of business will tend to actively select for those traits (unless active counter-selection is implemented).
So, sure, one does not automatically change by becoming a manager (although there are psych studies indicating high income causing reduced compassion for others), but there is good reason to expect higher "immoral or evil" behavior in higher ranks.
Sociopaths are highly overrepresented among corporate executives. No, people don't suddenly become evil and amoral when they become an executive; evil and amoral people are simply more likely to reach those positions, precisely because they're evil and amoral.
>What sort of mindset must one have to assume that once a person becomes a "manager" or "executive" they are suddenly immoral or evil? Such an odd worldview.
Becoming an executive at a publicly traded company is a joyous and public acceptance of capitalism. Capitalism is objectively bad, given the state of the world.
Capitalism is nothing more than the free exchange among willing parties and cannot be objectively bad.
If Boeing can put out faulty products that kill people with no recourse, that's not capitalism; that's the government allowing these criminals to continue operating unchecked.
That's also consumers accepting the risk and agreeing to pay for a flight. Here's a quick thought experiment in capitalism:
Two relatively new Boeing 737-MAX airplanes crash in relatively quick succession. A critical flaw in the aircraft is exposed to the public. Significant numbers of people refuse to fly in a 737-MAX.
Southwest made a substantial investment in 737-MAX airplanes, and cannot make their money back. They approach Boeing and Boeing gives them a 75% discount on a 737-MAX replacement. Or, Boeing shrugs their shoulders and says, "huh". Southwest cancels all future orders with Boeing and buys Airbus.
Or, Boeing waves their hands and says, "We fixed everything. The 737-MAX is perfectly safe now." The next week, Southwest relaunches their fleet and books 100% occupancy. They negotiate with Boeing about the $millions lost while the fleet was grounded and get back to business as usual. For the next 5 years, a 737-MAX crashes every year killing everyone aboard, but bookings stay high. All ticket prices go up $25 to fund the payouts to crash victims. Bookings stay high. Business continues as usual.
See all the very different directions capitalism can take you? I continually struggle to understand the mindset where all blame goes to the government and there is no individual responsibility for consumer choices. We've seen it over and over with Wells Fargo, GM, Facebook, and the list goes on.
Government can serve a meaningful purpose in the regulation of capitalism. But also, capitalism can and must serve a critical role in the regulation of capitalism.
It's not necessarily good engineering; it could also be selection bias. It depends on how much risk is added by cutting corners. If too many corners were cut and it only avoided failure through luck, it's still bad engineering.
It's also bad for the perception of the aviation industry if people die from faulty hardware. Many people don't like to fly already because they don't feel safe. These types of incidents don't help calm those fears.
Suppose Bob fires a gun into a crowded area and happens not to hit anyone. "I didn't hurt anyone! It's my right as a consequentialist!"
Bob still goes to prison.
Beyond that, engineering is about using a principled approach, by definition. Throwing something together ad-hoc and finding that it happens to work, is not what is meant by 'engineering'.
I think your analogy misses the point that a core scope of engineering is reducing cost. Firing a weapon in a crowded area is not anybody's goal, bar terrorists'. If Bob were a "humanitarian" terrorist, him firing the gun in a crowded area and not hitting anyone would be a great outcome for him, even if he missed everybody only by chance.
A core scope of engineering is to make technologies that make people's lives better without killing them. Reducing cost is one of the instrumental goals that helps a business do more than it could have done otherwise.
To risk someone's life is a very bad idea, even if it increases profits.
They'll do some rebranding and resume business as usual after the FAA approves their tweaks. Boeing will have lot of inventory ready to ship and there will no doubt also be some lucrative discounts for airlines currently staring very hard at competing offers from Airbus. Airlines will be flying these planes for decades.
Yeah, this is pretty weird and a little scary. I wonder if they are using it as a negotiating tactic - get offered a super low price and then give that price to Airbus to match? Airbus have already enthusiastically said they will bid on this business.
An issue recovering from uncommanded stabilizer trim which is not MCAS-related?
I’m going to assume there’s a real issue they are trying to report on, but Gell-Mann effect is strong with this one and some key information is missing.
Just visit a flight tracking website before booking.
In addition I'm sure airlines not flying the MAX will be using that fact as a marketing tool (i.e. we only fly Airbus etc) so you could limit your bookings with just those airlines.
As an example of this, here in Australia I know one of the carriers is planning to fly the Max some time next year.
If they go ahead with that plan, I will be making sure not to book with that carrier at least for the next few years.
From what I hear, at least one of the carriers here that _was_ intending to fly them this year has cancelled all the maintenance courses for them... They aren't going to be flying them until 2025 apparently, so all the engineers booked on courses to do maintenance for the 737 MAX 8 are getting re-booked for Airbus courses...
I may not be interpreting this right, but does that imply they already have the planes, which will now stay in storage until 2025 waiting for all issues to be resolved?
If that's correct, aren't there possible risks for flying a plane that had such a long period of inactivity?
For these planes to fly again they need to be certified safe to fly again.
However since the FAA was one of the last authorities to ground the plane, it really does not matter what outcome they achieve in their current investigation.
Other aviation authorities around the world will take their own time to investigate if the plane should be allowed to fly.
The conundrum for Boeing is it wants the plane back in the air ASAP but I'm sure many of the world aviation authorities are in no real rush.
But in the real world if a flight is $1 cheaper people will choose it regardless of the equipment. Airlines have vastly different safety records and it doesn't seem to change bookings (Allegiant is thriving despite ignoring maintenance).
> Just visit a flight tracking website before booking.
I always do this anyway just out of curiosity. Also if it's a plane I enjoy flying on it makes me look forward to the trip that much more. I have to admit that if I spotted "737 MAX" on there I seriously doubt I would book that trip, even after these issues are eventually resolved.
Which is unfortunate for me personally, because Southwest is my airline of choice. At least for now, I'm alright. The flight I just booked is on two 737-700s, which have my ideal exit row configuration.
Let's hope they do so we can meme it to death. It shouldn't fly a single mile more anyway, just out of respect for the victims. It's like trying to recertify the Hindenburg at this point.
If only the FAA wasn't corrupt and did their job of testing, verifying and CERTIFYING the 787 and 737max in the first place rather than outsourcing it to the VERY MANUFACTURER Boeing.
I don't know where y'all went to school, but I was never able to self-grade my year-end exams.
Absolutely unthinkable banana republic behaviour and not a single head rolled (except for the about 350 poor dead souls who had to pay for this corruption with their life).
The 787 and 737max represent two different things.
The 787 is an advanced and innovative plane. It blazes a trail in terms of seat-mile cost, comfort, fuel consumption, etc.
The 737max is a half-baked response to the re-engined a320 and also the somewhat smaller planes like the a220 and e195-2 that beat the 737 in cost per mile and noise as well as being more comfortable. (Think of the Japanese cars that were small on the outside and big on the inside compared to 1970s American cars)
The 737max tried to innovate as little as possible, and that is where it got into trouble.
I think Boeing still thinks that it can get the FAA to waive a simulator training requirement, but they won't, and by trying they will delay the recertification at the expense of airlines, shareholders, etc.
Sadly nothing new. Remember when the Air Force outsourced their fraud and abuse investigation database to Lockheed Martin? Which Lockheed then lost together with the backups?
What you are looking at is the direct consequence of deregulation. Profit over people is the logic of deregulation. The people want regulation to protect the populace and vested interests for profit do not want that regulation.
Did they take the ongoing, multi-faceted bribes out of incompetence, with no malice involved?
Hanlon's Razor is useful in some scopes, but imagine applying it to banking, politics, corporate regulation or law enforcement. Depending on your position, not doing due diligence and not acting ethically is malice.
Not doubting you, but has there been evidence of bribes in the case of the 737-MAX? I'm genuinely curious.
It may be mismanagement or incompetence, but I don't know if it gets to the degree of malice. Here's how I see it working:
The FAA is obligated to provide oversight. The FAA is also not given resources to do due diligence themselves (e.g., one inspector may be responsible for multiple airframes, meaning there is no possible way to know the systems in intimate detail to catch everything). To mitigate the lack of resources, they contractually obligate or delegate the oversight to the manufacturer, who (presumably) does have that system knowledge. However, it's still the FAA's job to ensure the manufacturer is doing due diligence on oversight. This can easily become a catch-22: without resources to catch the manufacturer's errors (e.g., the FAA was apparently unaware of software configuration changes), it's tough to throw the red flag when everyone is clamoring to keep the schedule moving. They have to defer to the system "experts".
The above is not based on personal experience with this particular case and I'm not saying the FAA is not culpable. I just don't think it's due to deliberate malice. It's the same issue all over the government in terms of oversight. Oversight functions are often the easiest to scale back during budget crunches which, over time, will increase the risk of bad events. Then there is an uproar, the pendulum swings the other way, people want to fund oversight functions until they are once again lulled into complacency by the lack of bad events (which is actually an indication that the oversight is working).
There is likely some degree of bribes in most government organizations. It's a cancer that needs to be removed, but the real question is whether it's at a level that makes true governance infeasible.
I would love to see the government acting in a proper and impartial oversight role. But are taxpayers willing to pay for it? Sure, we can make a case for increased funding to support the FAA after the 737 MAX, but what about when things seem to be going smoothly? And this is just one sector... would taxpayers be willing to provide the same resources to the FDA, EPA, OSHA, and all the other stretched-too-thin organizations?
I am far less knowledgeable on the subject than you, I think, but the FAA just not having resources should end in them not accepting any new missions and keeping a backlog of pending investigations for the next centuries.
The reason it doesn't work like that is because of pressure on them to keep up with the requests and keep the businesses running. That pressure can't just be their professional guilt; there are, I think, enough ways the industry can lean on them that they can't simply refuse to lower standards or to delegate part of their work.
That pressure is what I call bribes. Be it preferential treatment, exchange of favors, or straight-up money going to decision makers to make sure the FAA doesn't hurt plane makers.
I think you're right. The lack of resources is a leadership issue.
The system is likely biased toward people who don't stop work. Any single decision may have a low probability of catastrophe but if you roll the dice enough times it'll catch up to you. But until it does, those types of managers will get promoted because they "get things done" (i.e. they don't stop for due diligence)
You are of course, correct. My first engineering job fresh out of university was for GE. I worked at a (big) site in the British countryside in the Midlands.
One of our clients was Boeing, and we made a lot of sensors for them: anything from landing gear stuff, cabin oxygen and pressure sensors... you get the picture. Quite a lot of them were ultimately destined for the 787.
I also remember manufacturing sensors for Hamilton Sundstrand... who would then fit them in 737 rudder assemblies and then sell them to Boeing. The whole subcontracting business was very byzantine in its complexity!
I also distinctly remember Boeing execs coming over very frequently to give updates, feedback, etc.
This, to my layman's understanding, could be considered an "engine problem" because the newer, more fuel-efficient engines with their bigger bypass fans don't fit under the wings of the 737. That's not an engine manufacturer's problem.
(It makes more sense to classify it as an airframe design compromise problem, but it's about those big, fat, high-bypass turbofans not working with the short landing gear legs, which let them get the doors down nice and low but put the wings too close to the ground to fit the more modern engines.)
Why are you talking about the 737 when the response was to the design of the 777X, which is the largest Boeing, not the smallest? Talking about the engines under the wing doesn't make sense in that context.
But GE don't make "737 MAX 8 engines" specifically. They make the range of engines they have, and Boeing (and Airbus and Bombardier et al.) get to choose which one they want, given the constraints of their aircraft.
You _could_ fit a 500 cubic inch big block V8 into a Miata. But it's not gonna handle the same as it does with a 2L 4-cylinder in it, and it's gonna stick out the bonnet.
Engine Alliance (GE and Pratt & Whitney) and Rolls-Royce are the 2 players.
Because engines are purchased separately from the aircraft, the buyers and engine manufacturers must be in a bind, since the buyer needs to pay for the engines for Boeing to fit them, whether the plane enters service or not.
I don't think Boeing uses Rolls-Royce anymore after the Trent 1000 fiasco on the Dreamliner. The GE9X is a fantastic engine and far quieter than the Trent engines.
Boeing has many areas of work, including autonomous drone taxis. You'd need to be a bit more specific. Even the venerable 747 is getting work with a new passenger version (Air Force 1) that includes up-rated engines.
“Addressing this condition will reduce pilot workload” is a funny way of saying they're going to stop planes crashing themselves!
We're all quite used to having problems described in the least threatening way but I'm thankful that I've never had to do so about something that could result in people dying.
As a web/game developer, one thing I never understood is the alleged complexity of these control systems. There is no image/voice recognition, no graphics engine, no rocket science. What are they even doing? Or is it all a big lie?
Lines of code, from the internet[1]:
Pacemaker: 100k
Boeing 787: 5M
Chevy Volt: 10M
Modern car: 100M
As a web/game developer, you forgot to include the OS and the drivers in the lines of code involved in executing the stuff you are writing.
The Linux kernel was 15M lines in 2013 [1], more than half of that was drivers. Windows XP was apparently 45M lines, the 2009 Debian distribution 300M lines [2].
So you're into millions of lines just to execute "Hello world" on a PC.
Simple control systems generally don't use an OS. E.g. Arduino is as simple as it gets, here [3] is a 30-LOC example snippet for a HTTP client. But if you dig down into the included libraries (most of it is on GitHub) you'll find that there's (quick estimate) 10-20k lines of code involved in executing those 30 lines. And you haven't actually _done_ anything yet, that's just the infrastructure to allow C code to interact with the hardware with a few standard libraries for math and IO.
Modern cars are complex enough to have OSes [4], with the corresponding code complexities.
In short, web developers vastly underestimate the code complexity of interacting with hardware in a low-level language because it's not something they deal with. The umpteen million lines of OS kernel and device drivers are done by someone else, and for people who just _use_ a PC it (mostly) "just works", to the extent that even advanced users aren't aware of the complexity behind it.
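For a concrete sense of the hidden stack: even the canonical Arduino blink sketch (not the HTTP example from [3], just the simplest possible program) is a handful of visible lines sitting on top of the board core, the toolchain's runtime and the bootloader:

    // Arduino-style sketch (C++). The handful of lines you actually write...
    void setup() {
        pinMode(LED_BUILTIN, OUTPUT);     // ...calls into the board's core library,
    }

    void loop() {
        digitalWrite(LED_BUILTIN, HIGH);  // which hides register maps, clock and timer
        delay(500);                       // configuration, the millis() interrupt,
        digitalWrite(LED_BUILTIN, LOW);   // the bootloader and the compiler's runtime --
        delay(500);                       // thousands of lines you never see.
    }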
Are you suggesting a passenger car runs its own Linux distro? What would be the reason, other than Not Built Here syndrome? (Or the goal to deceive regulators.)
How many I/O devices/drivers does a pacemaker talk to? How many of those are written by/customized by the pacemaker's developers?
If you include the underlying stack, then also my <body>Hello world</body> runs on tens of millions of lines of code (Browser+OS).
> How many I/O devices/drivers does a pacemaker talk to?
A pacemaker would presumably be similar to Arduino: a simple processor, no OS. 100k lines (your numbers) isn't all that much, given that C standard libs are into the tens of thousands alone. Pacemakers have sensors to detect heartbeat, logic to send an electric pulse only when needed, and have an interface where medical personnel can hook up a computer to adjust some parameters. So 2-3 IO channels at a minimum (one or more sensors, one or more outputs for the electric pulse, and some sort of wireless interface for adjustments).
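The control logic itself can be conceptually tiny; a toy pacing loop might look something like this (every identifier and number is invented, and real devices are vastly more careful):

    #include <cstdint>

    // Toy structure only: sense, decide, maybe pace.
    static std::uint32_t msSinceLastBeat = 0;

    bool senseIntrinsicBeat() { return false; }         // stand-in for the sensing channel
    void deliverPacingPulse() { msSinceLastBeat = 0; }  // stand-in for the output channel

    void pacingTick(std::uint32_t elapsedMs) {
        const std::uint32_t escapeIntervalMs = 1000;    // ~60 bpm lower rate, hypothetical
        msSinceLastBeat += elapsedMs;
        if (senseIntrinsicBeat()) { msSinceLastBeat = 0; return; }      // heart beat on its own
        if (msSinceLastBeat >= escapeIntervalMs) deliverPacingPulse();  // pace when overdue
    }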
> Are you suggesting a passenger car runs its own Linux distro?
According to Quora (last link in my reply above) the most common alternatives are Windows Embedded Automotive, a Linux derivative, and QNX. Why? To avoid writing millions of lines of fairly complex code on their own, of course. Just like a desktop developer saves a ton of work by writing on top of the OS rather than implementing their own OS from scratch.
> If you include the underlying stack, then also my <body>Hello world</body> runs on tens of millions of lines of code.
Yes, but that's the point: The numbers for lines of code in a pacemaker or car includes the entire stack. If you want to compare it to web development, you'll need to include the full stack there too.
* Writing everything three times for three different computers with three different architectures in three different ways (e.g. numerical vs symbolic integration; see the sketch below)
* Using obsolete programming languages for the same reasons doctors use Latin
* Has to be usable by poorly trained pilots, while tired, on the other side of the planet in complete darkness with half the systems failing.
* Nobody dies if your web app/game crashes. The plane has to still be flyable in unimaginably bad conditions like a total engine loss by deploying a windmill (ram air turbine), therefore every scenario is accounted for.
Caveat is we usually find new scenarios by crashing planes, not by thinking about them first. Kind of a definitional thing: if you'd thought of it, it wouldn't have crashed.
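As a toy illustration of the cross-checking idea from the first bullet above (two independently written routines for the same quantity, compared at runtime; purely illustrative):

    #include <cmath>
    #include <cstdio>

    // Two independently written computations of the same quantity (the area under
    // sin(x) on [0, pi]); disagreement beyond a tolerance would flag a channel.
    const double PI = 3.14159265358979;

    double areaClosedForm() { return -std::cos(PI) + std::cos(0.0); }  // analytic result: 2
    double areaNumeric() {                                             // trapezoid rule
        const int n = 10000;
        const double h = PI / n;
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += 0.5 * h * (std::sin(i * h) + std::sin((i + 1) * h));
        return sum;
    }

    int main() {
        const double a = areaClosedForm(), b = areaNumeric();
        const bool agree = std::fabs(a - b) < 1e-3;  // hypothetical tolerance
        std::printf("channel A=%.6f channel B=%.6f agree=%d\n", a, b, agree);
        return 0;
    }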
Writing everything 3 times: I believe this is not the case for a car or pacemaker. For the Boeing: 5M/3 ≈ 1.7M lines of code. Still makes no sense.
Obsolete languages: Old language (or Latin) does not bloat the size of your code astronomically.
Usability in specific conditions: does not apply to a car or pacemaker. Yes, for an airplane, it can make the design phase lengthy and difficult but the actual size of the code will not bloat astronomically.
From some of the discussions I have seen on HN, writing safety-critical software is not a question of adding millions of lines of code - to the contrary, safe software must be clean and readable.
I see your point. At some point computers simply cannot help us anymore for certain tasks and can actually begin to be a detriment.
Software bugs notwithstanding, too much computer means less human piloting and less actual hand flying experience. It might lead to a lower bar of acceptance to be a pilot, or discourage the types of people who actually love flying (since it's mainly done by the computer). More software can mean more complacency, less attention and engagement, less feeling of authority (by being so conditioned to surrender to the computer) and less "feeling" for what is right and wrong in any situation ("the light is green, that means no issue with this thunderstorm, company policy says fly the route when the light is green") .
It's perfectly okay to let some jobs (or mechanical assemblies) be manual even when they could be made auto.
> The FAA can't afford another stuff up this second time around and as such I suspect they will be checking every aspect of the plane in very fine detail.
That could spell more trouble for Boeing.