Former Boeing Engineers Say Relentless Cost-Cutting Sacrificed Safety (bloomberg.com)
167 points by thereare5lights on May 14, 2019 | 82 comments



Man, Boeing sounds like such a disgusting place to work. Granted, I'm sure many, many large publicly traded companies are like this. But the way the article lays out these executive initiatives reminds me of mobster movies where they move in and milk an honest citizen's store for all it's worth.

That profits and executive bonuses should take precedence over the safety of human lives is horrible. I'm curious to hear more about why the unions formed, since it sounds like there was a very contentious relationship there.


TFA mentions shareholder value. How exactly do unsafe planes help shareholder value?


People are bad at understanding risk. Tell someone that this change will save the company enough money to bump the share price by $2, but there's a one-in-a-million chance per flight that it will cause a crash that will destroy the company instead. Do you go for it? A lot of people will hear "one in a million" as "basically impossible" and say it sounds great. They'll probably rationalize it further by deciding that you're being paranoid about the risks. And then a while later you have a few hundred dead people and a company with a shattered reputation.
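
To put rough numbers on it (a minimal sketch; the fleet size and flight counts are assumptions for illustration, not real figures):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double p_crash = 1e-6;                 /* "one in a million" per flight, from above */
        double flights = 500 * 3 * 365 * 20.0; /* assumed: 500 jets, 3 flights/day, 20 years */

        /* P(at least one crash) = 1 - (1 - p)^N */
        printf("P(at least one crash) = %f\n", 1.0 - pow(1.0 - p_crash, flights));
        return 0;
    }

That prints roughly 0.99998: "basically impossible" per flight compounds into "basically certain" at fleet scale.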


And to further complicate it, the smaller a risk, the worse we are at estimating it.

Investigations after the Challenger disaster revealed that engineers at NASA thought the space shuttle had a 1/100 failure rate per mission, while managers thought the shuttle had a 1/100,000 failure rate.


From a completely technical perspective, one can understand the cost of this risk with real option valuation techniques. However I’d say this is an inappropriate method when so many lives are at stake.


The value is a gamble:

By making unsafe aircraft, Boeing was able to avoid losing customers, and in fact make many more sales, because airlines' pilots didn't need to be retrained and their equipment didn't need to be changed. Combined, that made the MAX a more cost-effective alternative to the Airbus.

That’s your shareholder value, as long as your unsafe aircraft don’t end up crashing.

Also it’s mostly nonsense: executive compensation is often tied up in revenue and market performance, and they’ve mostly cashed out on that before the financial damage caused by their “work”.


We need to stop using "profit motive", "shareholder value", and "rational markets" as if they somehow provide moral air cover for actions that cause easily avoidable deaths and/or massive damage to the commons.


Everything today is about shareholder value. Nobody, but nobody gives a flying frog's fat fucking ass about the end user/customer/employee/etc any more.


Well, don't support public companies then if at all possible. Public companies = greedy shareholders


On the other hand, end users/customers/employees don't care about shareholders either. Sounds fair to me.


And why should they, especially when companies are acting like this more every day, proving the almighty dollar is more important than anything, up to and including human lives? Companies give us nothing unless we pay, with money or information; they are not entitled to us caring about them. Companies, however, should most definitely care about customers, because if the customers go away, so does the company.


Why do the simulators cost $15 million?


The simulators are certified training devices, which means that you can do part of your legally required training in them (and not have to fly an actual jet).

To be certified as a training device it has to be accredited as behaving like the real thing. That accreditation costs money.

That's not to say it couldn't be done cheaper, but until someone does $15 million is what the market will bear. The other option is gassing up a real jet for training.


But if the cost is shaping decisions about what counts as legally required training, that is an issue.

Also, who accredits the simulator? The company that built it?


Following up here. For a new jet (new type certificate), there is a minimum amount of required flying to get rated. In this case, you can put many pilots through a $15 million simulator and not put any wear and tear on your shiny new jets. When the choice is sim vs. real flight, the cheaper option is the sim.

But $15 million is still expensive. A major selling point (possibly the biggest selling point) of the MAX jets was that Boeing convinced the FAA that they were similar to old 737s and did not need a new type certificate. That meant the many pilots already certified to fly the 737 would not need to do any flight training to transition. All those currently certified 737 pilots could fly the new jets just by reading a manual on an iPad for an hour. That is a huge saving in training cost even compared to powering up a simulator. The simulators still exist because new pilots getting certified still need to log flight time; the iPad app is not good enough for them.

Regarding simulator accreditation, I believe Boeing does that themselves. They hand the FAA a stack of documents claiming that they tested the simulator with the same flight profile as a real jet and it handled the same as the real thing. The FAA basically says, "as long as you're not lying about running the test then the simulator is acceptable for training." This is how the FAA handles most certification: they do not do their own testing, they just accept manufacturer test results and documentation. As long as the documentation claims to meet standards then it's all gravy. In general this system works, because if someone crashes your plane and a whistleblower says that you did not actually perform the correct test procedures then you are in hot water.


They cost more than that! (And note they run the same code as the aircraft, in most cases.)


Our industry needs to take the 'Engineer' part of Software Engineering a lot more seriously. A professional certification to use the title looks more and more important with each incident like this; otherwise developers have no teeth to push back when flaws like this are brushed aside by management to save money or cut costs. Cyber security, data ethics: many things in programming could benefit from certification, and from having to legally sign off on your own work.


First off, this is nonsense: the software was doing exactly what it was meant to (and designed to) do. Hardware engineers chose not to provide multiple sensors to validate AoA, hardware engineers did not provide a human-capable override. MCAS was designed to not be disabled by pilots, because doing so would make the plane a different aircraft according to the FAA.

Anyway I have yet to see a software related “certificate” that isn’t rote-learnable, comically high level, or both.

You also have to ask, what are you certifying?

All of these are fairly trivial to avoid in small programs:

* Use after free
* Time of check/time of use
* Out of bounds
* Numeric overflow

Especially in any kind of test environment where you are being extra careful.
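
For instance, the numeric overflow case reduces to a guard like this (a minimal C sketch, nothing avionics-specific):

    #include <limits.h>
    #include <stdbool.h>

    /* Refuse to add rather than silently wrap; *out is valid only on success. */
    bool checked_add(int a, int b, int *out) {
        if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
            return false; /* would overflow */
        *out = a + b;
        return true;
    }

The check itself is easy; applying it consistently across a large codebase is the hard part, and it's not obvious a certificate measures that.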

Then there’s the language problem: many engineers have to use multiple languages, some only have to use “safe” languages. Should you require a different cert for each?

You’re also saying “not everyone gets to write software anymore” because the certification won’t be free.

How does open source then work? Clearly people working on the Linux kernel should be certified, so now you’re saying Linux should only accept patches from people who live in countries that can provide the required certs.


> Hardware engineers chose not to provide multiple sensors to validate AoA, hardware engineers did not provide a human-capable override. MCAS was designed to not be disabled by pilots, because doing so would make the plane a different aircraft according to the FAA.

I have two comments. One: replace 'hardware engineers' with 'management'.

The second: when I've read people talk about validating the AoA readings, it makes me twitch a bit. Partly because my day job involves firmware that manages a self-organizing sensor network. Validation of sensor data sounds easy until you force yourself to conceptualize what the system can know based on the actual data it sees, and not your perceptions.

Moreover, there is a strong tendency to over-focus on the ordinary case and not all the edge cases. Very often, dealing with edge cases is the fundamental problem. Consider designing the front end of a car. The primary design goal is actually 'passengers don't die when you drive it into a tree.'

The problem with the MCAS system is that it needs to work under all the edge cases, not just when the plane is flying in smooth air while the pilot is pulling the nose up. Like during a hard turn into wind shear.


I don't mean validation == determine which one is correct, I mean "make sure they agree" and don't trust them otherwise, which, as far as I can tell, is how other Boeing systems work?

I mean, there's also the space shuttle approach, where you have N redundant systems controlling N separate motors (or whatever) and assume that you'll never have >= N/2 of them producing incorrect output. That's a "no validation" approach that works by virtue of the correct instruments literally overpowering the incorrect ones.
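
A minimal sketch of the "make sure they agree" approach for triple-redundant sensors (the tolerance and the mid-value selection are assumptions, not Boeing's actual design):

    #include <math.h>
    #include <stdbool.h>

    #define AOA_TOLERANCE_DEG 2.0 /* assumed disagreement threshold */

    /* If at least two of three readings agree within tolerance, output the
       median (robust against one outlier); otherwise trust none of them. */
    bool vote_aoa(double a, double b, double c, double *out) {
        bool ab = fabs(a - b) <= AOA_TOLERANCE_DEG;
        bool bc = fabs(b - c) <= AOA_TOLERANCE_DEG;
        bool ac = fabs(a - c) <= AOA_TOLERANCE_DEG;
        if (!(ab || bc || ac))
            return false; /* no two sensors agree: flag the data as untrusted */

        double lo = fmin(a, fmin(b, c));
        double hi = fmax(a, fmax(b, c));
        *out = a + b + c - lo - hi; /* median of three */
        return true;
    }

Mid-value selection like this is a standard trick in triplex systems, though as the Airbus anecdote below shows, two sensors failing the same way can still outvote the good one.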


Supposedly an Airbus plane had triple redundant sensors and two of them failed with the same reading and the good sensor was voted off the island.

I'm walking away with the following explanation: Boeing made a breaking change to the aircraft and did such a good job hiding it from themselves, the FAA, and pilots that they made it impossible for experienced pilots to handle things when the system failed.


[edit to clarify: I'm not disagreeing with Gibbon1, I'm literally just curious about these questions and would love to know the answers]

Oof, what makes AoA sensors so terrible? Also, it seems like if you have something that isn't particularly robust (pitot tubes apparently also being egregiously terrible in that regard), surely having a less accurate but more robust reference tool would be a good "oh bollocks" backup, e.g. additional redundancy based on a different technology.


AoA sensors and pitot tubes are simple, purely mechanical devices that have to function in winds from 0 to 700+ mph and in temperatures from at least -60°F up to 120°F. They have to do this for tens of thousands of hours without failing. They have to survive being iced over repeatedly, being impacted by hail and rain, and all sorts of other difficult conditions that make reliability really challenging.

It’s a difficult problem to solve! These sensors are already probably much closer to what you would expect a low-fidelity reliable backup to be than you realize.


Generally sensors that aren't in a protected environment tend to suffer wear and damage.

A similar thing happened with Birgenair Flight 301. A wasp built a nest inside one of the pitot tubes, which was the one the pilot and autopilot were using, and also the one being used to generate warnings.

I think validating sensor readings is a hard problem. The validation itself becomes another point of failure and confusion.


This isn't a certification problem. It's an engineering ethics problem.

The pass-the-buck circle jerk is how this design flaw came to exist. Everyone in the engineering organization needs to have the balls to point out systems design errors. Management needs to listen to them and not issue "make it work" marching orders. Regulators need to not delegate their responsibility to the previous two.

More than one person could have put their foot down and demanded triple redundancy. That this didn't happen suggests even more safety concerns lurk in all of Boeing's products.


> You also have to ask, what are you certifying?

Currently, the avionics software is certified, not the software engineer. The FAA-delegate safety reviewers get special training, but otherwise a bachelor's degree in a related discipline is the standard for an individual contributor's formal education.

There is arduous process in place to help ensure that commercial avionics software is produced to an acceptable level of quality. Problems can still get through, but the process helps weed out a lot of issues that you'd likely see in non-safety-critical software.


The original comment was saying a certificate for software engineering, and my response was in that context - what qualities of an individual engineer should be measured.


Right, I was not arguing, just sharing what is currently done in this field.


Nobody is saying that you can't be a programmer, or a dev, you just can't be a software engineer without a certification.


Why, so the software engineer can be a scapegoat if something goes wrong?

Certify the software, not the person.


Stand up for your code and certify it like a mechanical or civil engineer certifies what they make, or don't wear the title of engineer. It's time to take on all the qualities of the sobriquet, not just the status and the salary that were appropriated.


Exactly. It's just a title. I'm perfectly happy being a professional programmer with no certification. Tinder and even most parts of Google don't really need "engineers". Boeing, not so much.


Today, the plan is 'certify the process', not the product. People are assumed to be faulty.


Certifying design engineers is the old school dumb way of doing things. The better modern way is to certify the design process and to provide domain specific training to engineers involved.


You're right, you just can't get a job as a programmer. You also can't contribute to many major OSS projects.


Huh? Why can't you? It's the engineer's job at a company that deploys an OSS project to test and certify the tolerances, SLAs, and best practices (usage documents, checklists, etc.) for something they choose to deploy, whether that piece of software was written by engineers or by plain old non-engineer developers.


In that case you're deferring the certification of all linux kernel engineers onto the guy who ships it. That sounds like a recipe for not using it ever?

I don't think you're considering how everything works together - in mechanical engineering a certified engineer designs her machinery around other tools that themselves were designed by certified engineers, all using manufacturing processes designed by process and industrial design engineers.

What you're claiming is equivalent to a technical engineer designing something based on equipment designed by - for all intents and purposes - a random person on the internet, and you're now responsible for determining all the mechanical qualities of everything yourself.

I think you're dismissing what it means to say "I am a professional engineer and I approve of the use of linux" in such a context. The only rational approach is to only take OSs (or any other part of your software stack) that has itself been written entirely by certified engineers.


That's nonsensical. An engineering project does not require a provenance chain of engineers all the way down with a really low level engineer certifying that every single grain that went into the concrete was hand sorted by herself. At some point you certify something by taking measurements and tolerances and deliver a safe operating range. There's no reason why an operating system or programming language couldn't live below that level.


This was a systems engineering failure, nothing more. The system is designed to find these and remove them. It has not been determined whether this was because of cost cutting or management pressure. It could be, but it is also possible it was just an error made by people.


Every 737 has multiple AoA sensors. (Reportedly each computer is only wired to one though).


Only one of which apparently fed into MCAS.


> Hardware engineers chose not to provide multiple sensors to validate AoA.

In effect you've just shifted the blame. Developers working at the lower levels could've pushed back on this harder if they were legally required to. My point is if mechanical and electronic engineers are liable then so should software guys - they need more power to say no.

> You also have to ask, what are you certifying?

An argument could be made that formal verification & ethics would be useful in this context.

> You’re also saying “not everyone gets to write software anymore” because the certification won’t be free.

Degrees aren't free either. Most developers aren't working in aerospace and won't need the rigour.

> How does open source then work?

I'm not talking about OSS. I'm talking about people who work with software that can kill people. If the Linux kernel is used as a technology in these machines then the software 'engineer' who made that decision is legally liable. The blame stops with them.


> In effect you've just shifted the blame.

No. If the bug were in the software (say, a numeric underflow leading to a crash) it would be a software failure. In this case the software engineers would have been told "here is your current AoA" and instructed to adjust the plane correctly in response. The hardware engineers/designers then provided them with unvalidated data, and I assume no details on the error rate (presumably because that would get the whole system flagged by the FAA as being nonsense).

> Degrees aren't free either. Most developers aren't working in aerospace and won't need the rigour.

"most" != all, literally my point. Also at what level does it kick in: OS developers? If they're using a licensed OS like QNX should all the QNX engineers need to be certified for avionics? How about linux?

> I'm not talking about OSS

So you're saying OSS shouldn't be used in commercial industry?

If you work on Linux: that's used in medical hardware, so it seems like all contributors should have your new Certificate in Not Killing People.

But also, at what distance from killing people does this license cease being relevant? You worked on (say) a firewall product on some device, it fails to prevent some attack and the medical device kills someone.

Or the radio stack?

etc


> I assume no details on the error rate

A perfect example of why the title of engineer needs to be earned. This is a baseless assumption given that literally anything could go wrong. Sensors could become damaged, circuits broken, etc. It is our job to plan for edge cases.

> But also, at what distance from killing people does this license cease being relevant?

The last link in the chain: The engineers who put their stamp of approval on the system being shipped to consumers (aka Boeing employees). If you're willing to risk human life on the fact the Linux kernel is acceptable for this task, then you should damn well be able to risk your job title.

If Linux isn't up to the task then why is it being used?


> Sensors could become damaged, circuits broken, etc. It is our job to plan for edge cases.

Not those edge cases. They have nothing to do with the core competencies of a software engineer and should be offloaded to someone who is competent. Do architects plan for edge cases where the steel beams were actually made of wood?


If the inputs to your system are wrong or nonsensical, you should fail fast.
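
Even a crude plausibility gate helps (a minimal sketch; the bounds are assumptions for illustration):

    #include <math.h>
    #include <stdbool.h>

    /* Reject physically implausible AoA input before acting on it. */
    bool aoa_plausible(double aoa_deg) {
        if (isnan(aoa_deg))
            return false;                           /* sensor fault / no data */
        return aoa_deg >= -20.0 && aoa_deg <= 40.0; /* assumed sane envelope */
    }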


MCAS was designed by aeronautical engineers, not software engineers. The exact sensors, function, and responses were all specified by aero engineers. The software, as far as we know, was produced using acceptable software engineering processes and functioned exactly as designed.

What, exactly, do you think a PE cert for a software engineer would have done here? Do you think the software people should have refused what the aero people certified as safe?


Require the engineer to sign off that they understand the scope in which this unit is being utilized, and that it is reasonably safe according to the best practices known at the time and their own full and complete understanding of the context.

It gives them legal teeth to say: "No, this has not yet been proven to be safe; I cannot sign off on that." However, at the same time a union or guild is required so that management can't penalize an engineer for being moral rather than a rubber stamp.


Are you suggesting that these hypothetical software engineers would substitute their opinion for the expertise of domain expert engineers? Or would a software engineer doing flight control software have to first be certified as an aero engineer before touching the keyboard? (Aero engineers do not use the PE system, btw).

How about people doing software for medical systems? Would they have to go to med school, do a residency, and pass medical boards before coding? How would this work?

Because refusing to accept specifications from domain experts and substituting your own is a great way to attach personal liability to yourself for something in which you are not trained even as a reasonably knowledgeable layperson, much less an expert. I doubt any software engineer could obtain professional liability insurance if that were the practice.


In such a case the Avionics Engineer (or whoever is actually designing the flight worthiness and characteristics of the overall system) would produce a white paper that fully describes the operational limits of the system under various conditions. Such a white paper (and its attached references) alone should be enough to create a fully working simulator; it would also be what the software engineer uses to confirm that the model they have made behaves within anticipated limits; and it would probably also require human review (pilots in the sim, running against the real software with simulated inputs).

That's the TYPE of thing I expect to happen in this context.
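
As a sketch, the "confirm the model behaves within anticipated limits" step might look like a sweep over the documented input envelope (the routine under test and the limits here are hypothetical placeholders, not the real MCAS interface):

    #include <assert.h>
    #include <math.h>

    /* Hypothetical stand-in for the flight-control routine under test. */
    static double mcas_trim_command(double aoa_deg, double airspeed_kts) {
        (void)airspeed_kts;
        return aoa_deg > 14.0 ? 2.5 : 0.0; /* placeholder behavior */
    }

    int main(void) {
        const double MAX_TRIM_DEG = 2.5; /* assumed limit from the white paper */
        /* Sweep the documented envelope; the command must never exceed the limit. */
        for (double aoa = -20.0; aoa <= 40.0; aoa += 0.5)
            for (double spd = 150.0; spd <= 350.0; spd += 10.0)
                assert(fabs(mcas_trim_command(aoa, spd)) <= MAX_TRIM_DEG);
        return 0;
    }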


And how, exactly, did the software engineers at Boeing stray from this hypothetical process, one which is not used in any specialty?

The job of the software engineer is to correctly implement the given spec. As far as anyone knows, that was done.

There is no one, in any industry, that wants their software engineers to say "I'm not moving forward until I've seen the validated medical testing and lab results that this design is based on. I will also need you to run a several year safety trial, provide multiple attestations that the design is correct by end users, regulators, and independent auditors, before proceeding."

What you are suggesting is ridiculously impractical. The specialties rely on one another, and if the controls and human factors people have signed off on the design spec, that's what the software engineers should faithfully implement. During implementation, if it becomes apparent that there are states the system can get into that are not called out in the spec, that obviously requires re-engagement. But that's not what you are suggesting, as far as I can tell.


Most software engineers don’t even have a college degree in computer science, and now we want them to get a professional certification? Good luck with that.

https://stackoverflow.blog/2016/10/07/do-developers-need-col...


The title 'Software Engineer' is an overused term, imo, that describes any developer out there.

That title should be reserved for those who have the same credentials as an ME, EE, etc. A CS degree holder or a self-taught dev in no way has the same training as someone with a CE degree.

Engineers are able to take their PE exam in either CE or SE. https://ncees.org/engineering/pe/


> Engineers are able to take their PE exam...

Provided you have a PE credentialed coworker who can vouch for you. That is a chicken and egg problem for most people in an organization with no PEs.


Looks like they are discontinuing the SE exam.


I'm self taught and blow most out of the water with software and hardware engineering. Most college degrees, even for CS majors, are a joke.


If anything you've just reinforced my point that there needs to be a higher barrier to entry when dealing with software that can critically affect human life.


I'm not disagreeing with you, but a change like that is going to upend the labor market and put a lot of people out of a job.


Are there a lot of people working on life-critical software?


Certifications are nonsense. I hear this everywhere I go: any time I mention "oh, I'm thinking about doing XYZ certification," it's "don't waste your time doing that, just learn what you're talking about and prove it." Certifications have a lot of workarounds; anybody can study and pass a test. It doesn't mean you can do the work.


Software certs have a bad name because of things like Oracle and Microsoft multiple choice tested certifications that are cheated outrageously, and crap like one-day certified ScrumMaster nonsense.

Probably the better model would be the apprentice/journeyman/master progression from the medieval trade guilds.


But how do you establish the ranks, and who oversees the process and ensures friends don't up-rank their friends?


Upranking/promotions: at the search arm of Alphabet, don't they do reviews by employees who aren't directly involved, e.g. relevant enough to review your perf but distant enough to avoid some bias? This may have been for interviews only, and may have been even further limited in scope than across the search arm itself.


So a model so outdated that no discipline uses it?


Anti-degree, anti-cert, anti every arbitrary $ barrier.


With this power, comes serious responsibility. If an engineer signs off on a design that they know to be fundamentally unsafe, that engineer has liability regardless of the internal pressures placed on them.


The incentives just aren’t there though. Performance reviews are all about impact, and engineers who focus on quality instead of impact are worse off in career advancement. Likewise with the incentives on companies; companies that do slow but careful development get overtaken by faster moving competitors which reward impactful employees.

And it isn’t even clear to me that most consumers would prioritize security / stability over feature-sets when choosing software.


How does a software engineer know that something is safe? Do they need to be aerospace engineers as well? Do they need to go over the full schematics of the hardware their software is running on?


Yes!

If you are in a context where your software has significant implications on the state of a physical system, you must be willing to work with the other engineers to make sure you've accommodated all the eventualities you can.

Part of being an Engineer is knowing what you don't know, yet following through and making sure you connect with the people who do in order to ensure all relevant questions are asked and answered.


So if you contribute to the Linux kernel, should you be an aerospace, medical, vehicle, ... engineer as well?


Short answer:

IDEs, compilers, and other tools of the software engineering profession should absolutely not be treated as the tools of a privileged class.

However, certain contexts in which software can be applied should be subject to an expectation of higher scrutiny, and entail compulsory cross-disciplinary knowledge acquisition and application of expertise.

You want to hack kernel code? Knock yourself out!

You want to make that code responsible for operating an airplane? Bust out The Complete Engineer's Guide to Jargon, and don thy pocket protector, because it's gonna be a bumpy ride to build the consensus that that piece of software you wrote is actually the right tool for the job.

Long answer: You're not getting it.

It's not about having/being a PE.

It's about knowing when the stakes are high enough where you need to talk with them, and make sure that you are making full use of their expertise in their subject area, and that they have full use of your understanding of writing software to ensure all your bases, behaviors, inputs/outputs, and edge cases are covered by tests, implementation, and appropriate requirements.

That means, for example, looking at the MCAS requirements, scratching your chin, picking up the phone, and calling the System Engineer the requirements percolated down from, to figure out what happens on the path from sensor to entry point, and what other pieces of data might be appropriate to include.

If you have no tolerance for asking meddlesome questions in the process of making a system which as written, has the capacity to pitch a plane into the ground, you are probably not ready to be put in charge of that implementation.

Write your Linux kernel in your own time however you want. But there is a time and a place for everything, and implementing hacks in a flight control system (which you as an individual should know is covered by regulation) is not the time to be a "yes" man. If you spend 90% of your time talking to people until the design is solidified enough that everyone will have the schematic pop up in their dreams until the end of their days, then you're doing it right.

And at the end of the day, if you throw up your hands and hit the "Fine! I won't question this!" button regardless of the situation, and code that piece of software that enables an unsafe design to kill 300+ people... then congratulations, you just learned this life lesson the hard way: by being a contributory factor to the deaths of 300+ people.

Engineering done wrong kills people. Fact of the territory. Please don't skimp when it truly matters. Even if no one ever finds out, go into every project assuming that once you die you'll have to answer for every decision you made in life, and ask: if I were taken to task for this, would I truly be comfortable that I've asked all the right questions?

If you can sleep at night without doing that on a flight control system... please don't seek employment writing software for any part of the aviation field more critical than the infotainment system.

I'm not trying to be elitist. It just is that complex, and the consequences of a shoddy job are that catastrophic.

You can't fool physics. She is the coldest, most evil bitch imaginable.

I hope this fully answers your question.


>>> If an engineer signs off on a design that they know to be fundamentally unsafe

The problem is you don't know it's unsafe. It sometimes takes a disaster to shed light on a problem. Engineering and design are hard.


Couldn't you make this argument about a mechanical engineer?


This is already the case for mechanical engineers, or any other Professional Engineer.


This seems like another case to stop H1B visa system. We are sacrificing quality over Q4 earnings by outsourcing skilled jobs to unskilled cheap body shops.


That has more to do with outsourcing and haphazard disintegration of product requirements than H1B's.


"Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."

https://news.ycombinator.com/newsguidelines.html


Uh what? H1-B’s are literally the opposite of outsourcing.

Yes there were outsourcing companies that abused the system (harming legitimate companies), but if you think outsourcing companies are going to magically start recruiting top of the barrel engineers you are sorely mistaken.


This is exactly what Agile excels at. This story is going to be an 8-point story? But Jim said he could get it done in 2.

Have a safety concern? The product owner doesn't think it's a priority.


What you describe sounds like Scrum done very poorly, not "Agile" in general.

> This story is going to be an 8-point story? But Jim said he could get it done in 2

If you treat individual ticket SP values as measures of time, you've already completely lost the point of everything.


Does Boeing use agile when deciding things like safety features on planes?

If not, is this relevant to the story?


Do they? I have no idea. I'm not an employee of Boeing.

Have I seen agile/scrum used to cut corners on and dismiss engineering concerns? Absolutely.



