> But Boeing counters that it has both "additional protection mechanisms" in the CIS/MS that would prevent its bugs from being exploited from the ODN, and another hardware device between the semi-sensitive IDN—where the CIS/MS is located—and the highly sensitive CDN. That second barrier, the company argues, allows only data to pass from one part of the network to the other, rather than the executable commands that would be necessary to affect the plane's critical systems.
Well geez, it's a good thing there's no class of bugs in which a certain amount of data, maybe more than the receiver was expecting, or terminated in an odd way, overwhelms the receiver in such a way that the data is then interpreted as commands and run in place of the receiver's own code...
Boeingspeak: "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system."
English: "This random guy on the internet discovered real vulnerabilities and we're scrambling like hell to fix them. We hope this carefully worded statement written by lawyers will keep the public and the FAA off our back until we can fix the problems."
This is Boeing we're talking about here, the English version would be more to the tune of "We're not fixing this until multiple planes crash and even then we'll try to weasel our way out of it."
"Experts are still not sure of the root cause of the malfunction in the data unit, but subsequent software changes by Airbus mean any similar error in the future won’t lead to another terrifying nosedive."
Isn't Ada required? I worked on Air Force C2 platforms for east European NATO programs, and the systems I was working with (one was called ASOC) were, at their core, Ada. We also had civilian ATC in our facility, and a few of those devs played on our softball team, all writing Ada as well. But it was part of the design spec and non-negotiable per the FAA, Air Force, etc., as far as I remember.
I've done security auditing of Ada and C avionics code (including for DAL-A components). As a language, I'd take Rust any day (in terms of security/robustness properties).
However, the Rust/LLVM compiler pipeline is nowhere near mature enough for use in high-criticality environments.
> However, the Rust/LLVM compiler pipeline is nowhere near mature enough for use in high-criticality environments.
I love Rust, but I wouldn't want it to fly a plane I'm on. For instance, here's a bug in LLVM that Rust developers happened to discover - before they disabled their use of the buggy features, Rust/LLVM was producing _numerically incorrect_ code. https://github.com/rust-lang/rust/issues/54878#issuecomment-...
> However, the Rust/LLVM compiler pipeline is nowhere near mature enough for use in high-criticality environments.
I don't know about Rust, but LLVM isn't mature? Nearly every program running in the Apple ecosystem was compiled using LLVM. Swift is compiled using LLVM. Since Xcode 4.2, Clang has been the default compiler, so iOS and macOS apps are built with LLVM. I'd also wager that Apple uses Clang to compile key system code (Darwin and the rest of macOS and iOS) as well.
At least a lot of it was due to having to specify around the weird quirks of existing compiler implementations. C has far more undefined behavior than it needs for optimization, and modern programming language theory and compiler research has shown that most undefined behavior isn't really necessary at all. [0] For example, safe Rust has extremely close (and sometimes superior) performance to C, and has almost* no undefined behavior at all. [1]
* Some is inherited from LLVM, some is bugs, some is dumb corner cases like using kernel syscalls to overwrite the program's own memory with random bits.
Name one example where UB is necessary for useful optimizations where simply having unspecified values or platform dependent behavior wouldn't be enough.
macOS and related systems don't run the airplane. Pilots have an iPad for some checklists and other things where "have you tried turning it off and on again" works as a failure procedure. If a failure caused by a compiler bug, say an overflow in a rare flight situation, happens in an airplane system, a reboot is not really the option you want to depend on.
I write software for in-cockpit iPad apps. They are becoming more and more integral to pilots' workload, covering everything from route planning, guidance, checklists, and manifests to ordering fuel.
There was actually an incident posted on HN a while ago where IIRC an iPad did a software update (or something similar) at a critical moment, and caused complications bad enough that an incident report was published on some aviation agency's website.
That might work in a stable situation: many (not all) planes are good gliders. I read somewhere that the A380 is quite a good glider, among other reasons because its wings were designed for larger variants; an F-16, however, won't really glide.
In a critical situation this is impossible, though.
That will not always work very well; inertial navigation systems often want to be completely still for a couple of minutes to calibrate themselves.
I have heard about just flying with no G-load for a while to get some basic instruments working. But I don't think you will get full nav functionality back.
Not mission-critical, must-never-fail mature, no. All of these systems contain tens (if not thousands) of bugs, and some of those probably come from compiler issues.
I'd take a compiler backend that compiles large parts of the modern software ecosystem every day and has countless programmers using it over some niche compiler that is used only for a vanishing fraction of all applications but boasts some kind of official stamp of applicability in safety relevant contexts.
I'm not so sure I would. For one, Ada is in no way a niche language. For two, C (and C++ by extension) simply were not designed with safety or reliability in mind. I happen to think that it's fair to say that they weren't designed at all, in that C compilers existed before the first language specification, and the language specification was forced to accommodate their quirks. Cargo culting about C is fine and all, but it often leads to absurd claims that it is more suitable in any arbitrary domain than languages and compilers designed specifically for the domain.
I think I'd prefer that safety-critical code be written in languages that don't allow pointer arithmetic except in scenarios that can be proved via static analysis not to introduce multiple memory aliases. Expecting program writers and compiler authors to get these sorts of things right in C/C++ is just unreasonable.
I'd argue that you would probably never find a bug such as that in a certified compiler, not because of the certification, but because far, far fewer eyes look at its output. I have worked in safety critical software and used a certified compiler. It had enough bugs that I personally found four in my first year.
Coq is great. Now the problem is developers having the sharpest, greatest tools but not understanding the spec, or working to a stupid spec and not calling it out.
Every exploit payload consists entirely of data, so I think you're even giving them too much credit there...
The thing that triggered me the most was that they got the engineers who wrote the code to test it, and report back that their own code was fine. From the sound of it they didn’t even test the vulnerability, they just did an external test, without specifically testing the segmentation controls or the components in question.
They probably rely on network segmentation to keep the various components separated. If they consider the segmentation to be 100% effective, there'd be no need to do the kind of in-depth testing you advocate. I don't think it's justifiable to assume that segmentation doesn't have its own bugs that could be exploited.
Yeah, that seems to be what their attitude is. I have a couple of major issues with that though.
1. Relying on segmentation to protect vulnerable components is not a reasonable standard operating procedure. Segmentation is supposed to be an additional layer of protection, you’re also supposed to secure individual components.
2. It seems as though the researcher believed there may have been a way to bypass some of the segmentation (the segmentation between the medium-sensitive network and the highly-sensitive network). The article kinda implies that they didn’t test that layer of segmentation, that they only tested the most external layer (the segmentation between the non-sensitive network and the medium-sensitive network).
The whole response comes across as dismissive spin, that they hope will be consumed by people who don’t have a particularly sophisticated understanding of network/application security.
Data diodes don't prevent malicious data that exploits vulnerabilities and takes over the receiver from being transmitted; they only prevent the malware from communicating back.
A data diode can be put in either direction, with different results:
* case 1, you allow traffic to only go out:
This way, nothing can come inside the system, but the system can export data.
Here basically, confidentiality is of secondary importance, but integrity is crucial.
It is the Biba model.
It can be seen in command-and-control systems for critical industrial installations, for example. A power plant's C&C system must not be hacked, but exporting data such as power output and operational condition to other systems is generally required.
* case 2, you allow traffic to only go in:
This way, the system can ingest data from the outside, but nothing goes out.
Here, basically, confidentiality is paramount, integrity a bit less so.
It's the Bell-LaPadula model.
It can be seen in military intelligence systems, for example. There you collect pieces of information and make decisions based on them, and all of that must be kept confidential.
To summarize:
* One way: you enforce integrity
* The other: you enforce confidentiality
As an ending note, data diodes are generally pretty simple: basically you take a fiber link with TX and RX strands, and you cut one. There are a few more tricks (UDP only, sending everything multiple times because you don't have ACKs, static ARP tables, tricking the NIC into thinking the link is up without a signal), but that's the core of it.
No that’s exactly what it does, it enforces one way communication from your high privileged domain to your less privileged domain. And your IFE or crew system is in the less privileged one.
My $0.02: I came across Boeing's documentation for their "Boeing Update" solution. (think Windows Update, but for 787s).
It described in detail how the planes are updated with new firmware for the avionics, entertainment system, and the engines. I was shocked to learn that the 787 uses a lot of COTS kit internally, such as standard WiFi and Ethernet connections. There's an RJ-45 jack at the front landing gear accessible from the outside of the plane at any time!
It was by far the best technical document I have ever read, of any type, by at least a factor of ten. It was so good I read it like a novel. Twice. The security design was amazing. The PKI was amazing. The patch management was amazing. The network design was amazing. The documentation was amazing. My estimate was that the document alone would have cost multiple millions of dollars to write, not including any of the engineering work that went into the solution itself.
Boeing's engineers thought of everything. EVERYTHING. This scenario was catered for:
- The plane is rented, not owned.
- The IT department is outsourced.
- Aircraft maintenance is outsourced.
- The plane is currently on the ground in a country that is hostile.
- A critical update has been released, without which the plane is unsafe to fly.
This is one of the scenarios that is literally spelled out, in plain English, and you're left completely certain that the update will be safe and secure despite all of that.
The security is just nuts. Everything uses explicit, hardcoded whitelists. TLS is mutual (clients are verified by the servers too). Patches must be quadruple-signed by Boeing, the parts manufacturer, the FAA, and the airline at a minimum to be acceptable. There are physical connection breakers and PIN codes on top of that. There are two nested VPNs on top of the already-encrypted WiFi. It just goes on and on.
No part of it left me thinking they could have done better. I've used that document as a template for my own work, and it's the better for it.
Since then, I've insisted on flying 787s whenever possible, because I'm certain that the engineering effort that has gone into those things is about as good as humanly possible.
Reading that just makes me think about how "wild west" self-driving cars seem to be, and most people seem to think that's acceptable.
We have Teslas that don't have anywhere near this kind of security or redundancy "autopiloting" themselves right now on highways.
People seem to be ok with that because it's a car and not a plane. But the way I see it, there's thousands of those cars on the roads, and a software bug across the fleet could cause just as much damage as a plane crash.
>We have Teslas that don't have anywhere near this kind of security or redundancy "autopiloting" themselves right now on highways.
Not just Teslas, modern vehicles in general. I would say that in terms of security Tesla is probably doing a more bang-up job than the other automakers; it was only a few years ago that Charlie Miller and Chris Valasek remotely killed a Grand Cherokee on the highway.
As an automotive security engineer, I have to interject. Miller and Valasek hacked into a car that was very far removed from what we consider modern automotive electronics. Any truly modern car will have decoupled networks with firewalls in-between. It will have intrusion detection systems, secure boot, signed code, encrypted memory, will communicate critical information via TLS etc.
The Jeep hack (as well as their Toyota and Ford hacks) was extremely important, because it put public pressure on the less technologically capable OEMs to get with the times and implement a (somewhat) secure electronics architecture. As someone who shares the road with those shitty cars I'm thankful for that. But even at the time of the hack, there were many OEMs whose cars were not anywhere close to that vulnerable, and the industry hasn't stood still since then.
And since you mention Tesla, I also have to point out that they are one of the worst at security. E.g. they have an RJ45 port behind the dash that you can just plug into. It used to be that this gave you complete access to everything, but people abused it. So Tesla made it a little bit harder, though not impossible, to get into their system. Tesla also has a lot of bugs in their smartphone integration that allow "fun" exploits like remote unlocking.
That sounds like a fascinating read -- do you happen to have a link to the document (or even a title for it) if it's publicly available? A preliminary Google search turned up nothing
> "No part of it left me thinking they could have done better."
Do you have the knowledge required in order to judge that?
> I've insisted on flying 787s whenever possible, because I'm certain that the engineering effort that has gone into those things is about as good as humanly possible.
Have you compared it to the other Boeing airplanes? To Airbus airplanes? Bombardier? Embraer? The new Mitsubishi airplane?
> My estimate was the the document alone would have cost multiple millions of dollars to write
I get that technical documentation can be incredibly good, but even picturing the most thorough and well-written documentation I can, what could possibly make it worth "millions of dollars to write" when writing an actual book can hardly top $100k even by going all out on expenses?
I imagine most of the cost in the OP isn't the physical writing of the documentation, but rather the time spent gathering the information and interviewing the engineers and developers. On top of that, you might need to employ separate engineers to review the designs/code and verify that the information is correct and accurate: effectively a design/implementation audit. If done truly independently, I can imagine this gets quite expensive very quickly.
I'm sure the number of lines of code involved in a 787 is well into the tens of millions. You'd also need the help of electrical/computer engineers, mechanical engineers, aerospace engineers, and who knows what else. This isn't a simple matter of plopping a tech writer in front of a computer to "punch out the doc".
I don't imagine the systems are simple enough that a single engineer or developer could cover everything relevant. I can easily see a team of well more than 50 being needed to produce such documentation, and I'd conclude that $1M USD is way on the low end for the cost of producing it, maybe even by an order of magnitude or more.
They engineered a marketing system that allows Boeing to price discriminate, one which artificially makes the plane worse. That should tell you everything you need to know about what's happened to Boeing's engineering culture and who's running the company.
Then again, the more complicated something is 1) the easier it is to break 2) the more vulnerable it is, no matter what the designers think. It gets to a point where full testing cannot be done.
Hmm, it looks like the work of one or two authors who were really in love with their jobs (most probably contractors). Having seen interviews with various people from Boeing, I don't believe they can do anything good unless there is money in it.
> My estimate was the the document alone would have cost multiple millions of dollars to write, not including any of the engineering work that went into the solution itself.
Well... some completely random stranger with no credibility and no supporting documentation said it on the internet, so it must be true.
The 787 is the greatest technical achievement of mankind, bar none!
I feel like this is the kind of thing that would've been completely ignored by everybody except for a handful of concerned hackers had it not been for the recent media outrage against Boeing (and in my opinion absolutely deserved).
I guess the question is how bad is it (from the article it's hard to tell exactly, but it sure doesn't sound great)? And another question is how many of our systems that we rely on, from bridges to airplanes to traffic lights, are just actually very insecure but either nobody notices or nobody exploits them?
That said, Boeing's abysmal PR and completely blanket "it's not our fault" statements make me assume the worst here. I have no idea how that company will ever earn back my trust. But maybe they have enough regulatory capture, much like Equifax, that they just don't care.
I can tell you traffic lights are extremely insecure. Last month there was a traffic light that was turned the wrong way, such that it was impossible to tell if the light was green. So I climbed the pole and turned it in the right direction. Another fellow pedestrian thanked me.
A bad actor could do anything from a DoS (positioning the light the wrong way) to tampering with the bulbs (for example, swapping out all the greens with reds).
The reason society mostly doesn't collapse is that we assume most people are good actors. Unfortunately, once your device is hooked up to the internet you vastly increase the odds of dealing with bad actors, and you have to spend more time and money securing against them.
I live near a big avenue that has 12 lanes (6 for each direction). There are two local lanes (that you can enter or exit only on a few places).
It has been this way for probably longer than I've been alive (30 years). There is a huge traffic light for the central lanes with a visual timer (like all the others on this avenue: green horizontal lines that fade one by one when the light is about to change), and a smaller one for the local south lanes at a given crossing.
After some road paving, the pedestrian crossings were made accessible, but for some stupid reason the behavior of the local traffic lights was changed into a deadly combination. The speed limit is 60 km/h for both the central and local lanes (but people drive at anything between 50-100 km/h).
Previously for 30+ years: everything turned green/red at the very same time.
Since some date before 1 January: the central lanes turn green first. The smaller traffic light for the local lanes turns green after 10 seconds. The local lanes also turn red a few seconds after the central ones. In some places this might create an incentive to shift to the local lanes and back (while hitting the gas pedal) if, after 30s-1min, you see traffic ahead and realize you can't make it in the central lanes. Not sure if I consider this a feature or a safety risk.
The first time I passed by after the change I didn't stop (5 a.m., New Year's) because I was watching the central-lane traffic light, and it was too late when I noticed they had changed it. A second time, I had to hit the brakes.
During the first weeks after the changes, I saw a dozen or more cars either running the red light without stopping, or going after waiting for the [central] lanes' bright traffic lights to turn green.
Six months afterward, a reckless military driver killed a disabled woman in a wheelchair nearby. Possibly unrelated, but I wouldn't be surprised if the flawed road design played a role.
> I can tell you traffic lights are extremely insecure.
All municipal infrastructure tends to be. It's usually implemented to a cost and security considerations are completely absent.
You can bet that in any given city, all those street light control cabinets are keyed alike and the city has no true idea who has keys and who doesn't.
This exact problem applies to so many domains it's literally for lack of effort that they haven't been exploited yet.
In India I found that in Bangalore (a city with far better infrastructure than most others), at a lot of intersections the traffic lights are toggle switches that some cop flicks on and off every so often. There is no lock on the switch cabinet.
All those cabinets use the same key across cities as well! Otherwise FEMA and other services would be unable to function. For the same reason all LEO handcuffs use the same key, so that any officer could release any handcuffed individual.
Edit: Some googling for links led me to this video, which seems relevant:
I'll Let Myself In: Tactics of Physical Pen Testers
> tampering with the bulbs (for example swapping out all the greens with reds).
Traffic authorities use coloured bulbs where you are? Weird.
Where I am, as far as I'm aware the colours always come from a gel in front of the lamp. Furthermore, the red lamp holder is larger than the green and amber ones.
I think they still use gels on white LEDs for many of them. I've had a look at a few that at least did that. It's possible they've switched to dedicated colour phosphors, or that the majority of them have been that way for a while.
I think many of the lights here are also retrofits from incandescent traffic lights.
All the traffic lights here in Australia are honking great big metal poles that you'd need some serious equipment to reposition. Swapping bulbs would be a big operation too.
What country are you in that has rotatable traffic lights?
I'm in the USA. This particular incident happened in Washington, DC. I also saw a driver do this once in Palo Alto in Silicon Valley.
Anyways, even if lights are too big to climb up you can put on a yellow vest and get a flashing light on your van and people wouldn't really think twice.
I hopped on Google Street View near the central core of Sydney, and it looks like at least in certain areas you also have lights like that: close to the ground and easily accessible. I'm guessing in more suburban or rural areas the signals are higher up (which makes them easier to see from a distance while driving faster).
Even if the traffic lights can't be rotated, it's relatively trivial for a bad actor to break them in some way, hang up a permanent green light at night, and then watch the chaos.
As gp said: The amount of such bad actors is low. And gains from an individual hack are low and there's a chance of getting caught.
"Relatively trivial" must be relative. Traffic lights where I am require a cherry picker in the intersection to get at.
This is like hacking servers. If you can get all the way to physical access with the device, of course it's exploitable. But that doesn't actually say a lot about how secure something is.
I imagine the quality of the lights around the world differ. If you can climb up and adjust it, these aren't the kinds of lights I'm thinking of.
So you'd need some equipment to do anything to them. And that equipment would have to basically shut down the intersection. So do it at 4am and hope that nobody drives by for the half hour you're there.
It's pretty pragmatic to take a 'wait and see' approach to dealing with bad actors. You start doing that in your town and there will be some changes once budget adjustments are made. LA had a problem with people tagging signs on the freeway, so now many are wrapped in razor wire.
>I have no idea how that company will ever earn back my trust
Millions of ongoing safe flights? I dunno. I feel like they're getting savaged (which they deserve... to a point... but we will cross that point I am pretty sure, if we haven't already...)
The thousands (tens of thousands?) of safe flights per day don't make the news. Boeing has been a pioneer in the safest form of transportation in existence. Mentour Pilot (an active 737 pilot on YouTube) goes into detail about why he's not concerned about Boeing (any more than he's concerned about Airbus).
I can also share a story from my (late) father, who worked at Boeing for 30 years (and was working there during the MAX crashes). I asked him why Boeing let the 737-MAX debacle happen. These were a dying man's words (paraphrased): "Boeing wanted to ground the plane after the first 737-MAX crash but the FAA refused until after the second crash. Boeing did not have the authority to unilaterally ground the planes."
Also worth pointing out that there are 4 times as many 777s as there are A340s. Doesn’t mean A340 isn’t great, just that the odds of no accidents go up with fewer flight hours.
The popularity of A320 and 737-NG makes their safety records particularly impressive.
It also speaks highly of the level of engineering and testing required to determine the limitations of each and every part that must be maintained for safe flight.
I was always irrationally nervous flying into SFO, worried about precisely what ended up happening. We should all be glad the plane mostly held together through the cartwheel, saving many lives.
True, it's not part of the certification process, but I do know that modern jet airframes are made to be as... um... "flexible"* (? searching for the word here...) as possible in catastrophic situations. An older airframe probably would have disintegrated under the same forces.
Luck and good fortune also played a huge role, of course. What the pilots did was unconscionably negligent. CFIT is often (usually?) fatal for all passengers.
*EDIT: It's similar to software that's written to be extremely robust that encounters an unexpected error (or set of errors). The software wasn't necessarily designed to handle it, but sometimes it can nonetheless.
That isn't really something engineers can design to. What happens is the aerodynamics group calculates the maximum loads on the airplane. This is increased by 50% and called the "ultimate load", and the parts are all designed to not break up to that load.
Parts stronger than that are overweight, and weight is the enemy of all airplanes.
After parts are designed, they go through an independent "stress" group which verifies that the parts meet the strength requirements.
I think what he is referring to is the modern use of composites and carbon fiber which are both stronger and lighter than older aluminum or titanium components. Titanium, for example, is a very hard metal that will fracture more readily than a softer composite that is able to dissipate the stress better.
Boeing probably doesn’t have the ability to ground their fleet with legal force, but if they said, “these planes aren’t safe, don’t fly them until we can investigate and fix,” most airlines would listen and I don’t think the FAA could somehow block this.
If that quote is so[x], that casts a very deep shadow over the FAA. They've gone from handing off their responsibility for validation to outright rejecting the safety advice of the maker of the aircraft.
[x] and I don't mean that to doubt you but I hope you understand I can't be absolute on this without verification
Keep in mind I don’t know exactly how those conversations went between Boeing and the FAA, nor do I know exactly who my dad was referring to when he said Boeing (the head of engineering? head of safety? A group of people inside Boeing? Was Boeing unified in its opinion or was this concentrated in the area of engineering my father worked in?) Grounding an entire class of plane is not a decision made lightly because of the huge financial and political implications. My takeaway is Boeing leaned towards grounding and the FAA did not and on balance the plane stayed in the air while Boeing worked on the fixes.*
*EDIT: This is probably a conversation that happens after every crash/major-incident and hindsight is 20/20.
>how many of our systems that we rely on, from bridges to traffic lights, are just actually very insecure but either nobody notices or nobody exploits them
I write software that is critical for public-safety customers (think police/firefighters). Maybe this is just my perspective having left a defense company, but it is terribly insecure. The "secure" version of our product was obviously an afterthought; it was poorly executed, and I don't think it's even widely used. And my company dominates this market, so the attack surface is huge.
How likely do you think it is that a company that manufactures traffic lights, and incidentally builds the software to control them, would cough up $40+/hr to have someone independent come in and vet software that was written for $12/hr and seems to work just fine?
Part of this is just poor understanding and pricing of software consultancy, along with some absolutely terrible actors in the HPC realm. Ideally your HPC will come in and spend a fraction of the time vetting software that the dev team built, but occasionally you get a fraud who works 8/5 for a month at $120/hr and delivers nothing but vapor in the end.
Maybe some security consultant industry group could set up a certification program, though all the times in memory I've seen software related certification it's been
It seems insane that all this code isn't just open source by default. No one's going to be able to rip off airlines by stealing it, you still need to have a company that, you know, sells planes.
Keeping it closed seems like a full admission that "there are probably a bunch of bugs in here and we don't want people to see them"
Most executives care about profits; security is simply not important to them. Even if an engineer explains that he needs more time to properly secure something, he will be asked to cut corners. Then, when the shit hits the fan, the executive will make a "pikachu face" and the engineer will get fired for not properly implementing security.
Having met a fair number of top executives I don’t feel this is true. People at the top do care quite a bit, and put personal pride into their company being good. But all low level decisions are made downstream, and middle managers are far less personally invested. Reactions to bad press are reactions. Hard to say whether it reflects anyone’s reality.
They are good at giving lip service but I rarely see anything more than that given. When it comes to budgets and hiring, one might say when it comes to putting their money where their mouth is, then you see how much they actually care.
Hmm sounds reasonable but I think not. Management is hard and information is imperfect to all actors. With a god like view you would probably conclude that there’s a variety of people who should be removed from an org for its health and performance. The top may be accountable for things, but I wouldn’t think it’s correct to blame them personally
From everything I've heard, this changed around 1997-2001 when Boeing merged with McDonnell Douglas and moved their headquarters 2000 miles away from their engineers. Old Boeing (the first 80 years) was run by aerospace engineers. New Boeing is not.
The real reason is that there’s realistically only downsides for the company. The public doesn’t know what “responsible vulnerability management program” means, but they certainly know what “major vulnerability found in Boeing code” means. So doing that will only mean they gain nothing, or take reputation hits.
Open sourcing the codebase doesn't mean that all its vulnerabilities will be discovered, and it's certainly not the only way for a company to manage them. Out of all the options that are available, it's really one of the worst ones from the company's perspective.
I'm very pro OSS, but I don't think this actually makes sense. Other open source, non-free software is usually a general purpose piece of software expected to be licensed out to many users and run in a variety of environments, for a variety of use cases. The flight control software on a 787, on the other hand, has one intended end user (who is also the owner), runs in one rigidly controlled environment, with a fixed set of use cases. There's no benefit to the public domain, and avionics is such a specialized discipline that the set of people capable of doing useful error or vulnerability analysis on the software probably highly overlaps with the people Boeing is paying to do said analysis in a private capacity (as they're mandated to by regulations anyway). I just don't think getting a bunch of eyes of random, generalist security researchers on it would meaningfully improve the safety and security of the planes.
Maybe open-source software is today's standard, but I imagine it wasn't back when those airliners were first designed.
Now, imagine they did open-source their code: I imagine those codebases are humongous and it would take months if not years for security issues to be found by the community. How do you make sure that a bad actor doesn't find a flaw before the community does and uses it?
So open-sourcing sounds totally unrealistic to me.
I think that's the exact point the parent was making. They don't want it open-sourced, because they already can access it, and open-sourcing would only mean that good guys will have access to it too.
I don't think this is a realistic depiction of the threat model of people who would hack commercial jets. Sure, state sponsored APTs probably could access it if they wanted, but none of them are in the business of crashing passenger planes. Maybe tracking them or grounding them or stuff like that, but they already have those capabilities through economic or bureaucratic or military power. The "bad actors" of concern here are the same types that have brought down passenger planes before - small, independent groups or lone wolves. And they are the ones who would benefit from the code being open sourced.
The ones who would benefit from it being open sourced are the passengers and pilots. Your objection is mitigable: release it piecewise, starting with the customer-facing code, to trusted outside groups.
Ultimately your argument applies to all life-or-death code, even code we put inside our bodies, which, as you mentioned, is also highly specific and specialized.
Arguing that because the bar is higher there should be less review is a contradiction.
APT (often not a State) conversations are pointless where plausible deniability is ignored as a desirable property.
Open source obviously doesn't imply security, as can be seen in security-critical open source software like OpenSSL, which repeatedly turned out to contain very critical bugs that went unnoticed for a long time.
No one would really benefit from open sourcing it without being able to (security) test it in realistic scenarios. Obviously security researchers would profit by discovering bugs, even ones with no relevance in reality, to increase their fame.
Keeping it closed is pretty much the default behavior.
One problem with opening the source code is then Boeing would have to dedicate a team of engineers to deal with every armchair crank claiming the software is going to crash the airplane.
And you are certain that all the "faulty sensors" that come up every now and then are actually mechanically faulty and not some bad actor sending garbage data apparently coming from that sensor to avionics?
Survivorship bias? Having only just learned that term, it sure seems to apply: of all the planes that have returned and landed safely, none seem to have been hacked; but there is no proof that a hack couldn't have already brought down a plane.
Well, the complete set of modern crashes is relatively small, and the set of crashes with unknown causes is very small, so the number of crashes tied to hacks is probably even smaller.
Would publicly releasing the source code to avionics software for civilian aircraft pose an export control/ITAR issue? (I imagine there must be some degree of overlap between civilian avionics software and military avionics software, and the former may be a good starting point for building the later.)
Yes. I worked on some of what is being discussed (main thread), and there were many extra pieces of documentation that I had to prepare declaring ITAR concerns.
Anything that can create the proper incentives for keeping code at all levels secure enough. (Indeed, perhaps open source is the only realistic one, for newly written code at least).
Because yes, it seems the problem is with incentives. If attacks on such things happen seldom enough, that will never trickle down into incentives for managers at all levels to prioritize security high enough.
Then, somebody could argue that if it happens seldom enough, that is reason not to prioritize it that highly. But I don't think the actual risk translates into actual incentives for managers in a very linear or fact-based manner, especially when the number of occurrences is very low while the consequences are still catastrophic.
They'd lose the ability to differentiate themselves (only on this specific criteria, but definitely affects overall competitiveness), thus having to compete more on price, which is great for customers but bad for companies.
I used to work for an airline as a software engineer (different from a company that makes airplanes, but you brought up airlines, so I think it's valid).
We definitely attempted to write the best code as we could given the circumstances, but we had issues doing so:
* airline margins are razor thin, so salaries are comparatively low, which means
* the best employees frequently left for other opportunities, causing
* management to institute an over-reliance on process and tech debt from poor engineers to build up like crazy, and then
* management's priority was always "keep the lights on" rather than repay any tech debt or start new ventures.
Eventually we were working on an unmaintainable codebase, spending way too long to ship each feature, and the situation was not improving.
It was not a wonderful environment to work in (hence my departure).
> * airline margins are razor thin, so salaries are comparatively low
Excluding executive pay, of course. Oh and excluding stock buybacks (which increases shareholder value, consequently greatly increasing the value of executive compensation).
Razor thin margins which result in $200 million (give or take) in quarterly profits are not exactly sad stories.
In summary, the non-executive employees are paid as little as possible to keep the company operating. And by operating, I mean that the bottom line/shareholder value is all that matters. Safety is really just a bottom line consideration. If an accident or two happens, and an eventual death payout is made, as long as the bottom line is not greatly affected, there will be no change in corporate behavior with respect to paying people properly and not cutting corners.
Arguments like this are absurdity taken to the max. Any executive's pay is a tiny, tiny fraction of the combined salary of all the employees. Even if you set that pay to zero and divided it up among all the employees (proportionally, so those who already make more get a bigger cut), the engineers would only get a couple dollars extra.
Secondly, stock buybacks are similarly tiny compared to how much money a company actually has to give to its employees. Run the calculation some time. You need to be looking at revenues, not profits.
> I used to work for an airline as a software engineer (different from a company that makes airplanes, but you brought up airlines, so I think it's valid).
Airframe software is a totally different ballgame than airline operations management software.
When I worked on the 757 on flight critical systems (stab trim) the engineers I worked with took great pride in making the designs as good as possible. Nobody wanted to sign off on a design that they'd get a phone call on years later as being the cause of a pile of dead bodies.
I personally am proud that none of the stuff I worked on or any of the guys I knew worked on has been a factor in any incidents I've ever heard of.
We're talking about open sourcing, not ripping off copyrighted code.
Would depend on licensing but in any event, once you start showing how the sausage is made others can find inspiration to develop their own code, at which point you can start getting into a costly legal battle over whose idea it originally was, whether certain algorithms are protected, etc...
My point is simply that there's no upside for Boeing to open source their code.
You've got that backwards. There is no downside. Software does not an airplane make, even though there are now attempts to use software to fix airplane design issues.
You wrote 'Airbus' as though Boeings R&D in their software would somehow magically translate into an advantage for Airbus. But Airbus should also open source their code, and for exactly the same reason. In fact Airplane certification institutions such as the FAA and counterparts could easily mandate the open sourcing of every last bit of software to create a level playing field.
But Airbus has been doing computer-aided flight longer than Boeing, so maybe there would be more pushback from Airbus against giving Boeing any free code?
The whole idea that you could not open source it with such restrictions that a competitor could not use the code is so strange. It is perfectly possible to open source code in such a way that you can't use it for free in your own commercial product without a license.
The whole idea that by open sourcing you will automatically turn their code into code that maximizes social prosperity (safer? cheaper planes? doubtful) is so strange.
Question: why would making the software open source make it any more safe? Do we really anticipate droves of engineers combing through Boeing's code helping them eliminate bugs (without some sort of bounty program)?
Unless you make MORE money by showing it than by keeping it private, why would they? There is every chance that some piece is useful for a competitor, so what you did was give your competitor some code they didn't have to write themselves.
The reason to have code open source would be to get public confidence perhaps, but I doubt that makes it a net positive in their eyes.
Personally I think if voting software is doing much more than basic addition, someone messed up.
If you write code that takes an input and increments a number, and you're worried that someone might exploit your code, someone else should have written the software.
Attach a printer that shows the voter a paper receipt before the receipt goes in a box, and it should be able to prevent against most any electronic attack.
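The "basic addition" point above can be sketched in a few lines. This is a toy, not a real voting system design: the candidate names are hypothetical, and a real system would need ballot secrecy guarantees, audit logging, and the paper trail the comment describes.

```python
from collections import Counter

def cast_vote(tally: Counter, candidate: str) -> str:
    """Increment the tally for one ballot and return the receipt line the
    voter would verify on paper before it drops into the box."""
    tally[candidate] += 1
    # Receipt deliberately shows only this ballot, never running totals.
    return f"BALLOT: {candidate}"

tally = Counter()
cast_vote(tally, "Alice")
cast_vote(tally, "Bob")
receipt = cast_vote(tally, "Alice")
```

The point of the comment is that the attack surface of something this small is near zero; everything else (networking, UI frameworks) is where the exploitable complexity comes from.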
Not that Microsoft Flight Sim or X-Plane aren't awesome and relatively high fidelity simulators, but I don't think that's a feasible QA loop, as a) they're not simulating all flight systems on a 1:1 basis, b) even if they did have a fully to spec simulation of the flight control software, it's still insufficient because the real thing runs on specialized hardware, which is not being simulated in flight sims of this tier, and most importantly c) the external environmental inputs are also being simulated so it would be of limited value.
"Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep's brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they're working on perfecting their steering control—for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep's GPS coordinates, measure its speed, and even drop pins on a map to trace its route."[1]
The wheel control only working in reverse kind of makes sense. They're probably using some kind of self-park feature to control the wheel, and some engineer (sensibly) put in some kind of interlock to prevent the wheel from moving on its own when travelling at speed.
The wording of the article implies that these particular attacks only work when the car is travelling at low speed, but earlier in the article they did mention that they could (and did!) throw the transmission into neutral while the Jeep was driving on the highway. The driver was unable to recover without turning the car off and back on again.
In a followup a year later, they showed that they were able to do these attacks at any speed, including turning the steering wheel.[2]
Some of the hacks demonstrated disabling brakes. All that's needed is disabling them, and then when the driver (or rather, passenger in the drivers seat of the death express) is frantically stomping on the brake, selectively re-enable them on one side.
The article says “control” but doesn’t have a lot of specifics.
In any case, for bad dudes that are pursuing you when people aren’t around, being able to shut down your car is just as bad as being able to control it.
Considering that cars are becoming more drive-by-wire, it's only a matter of time before a hacker will be able to actually steer a car or activate (or prevent activation of!) the brakes.
it's only a matter of time before a hacker will be able to actually steer a car or activate (or prevent activation of!) the brakes.
A Tesla is the perfect example. The fact that it has self-driving---I mean, "assisted cruise control"---naturally means a computer can take over the controls entirely.
I forget which car it was, but I watched a car review video recently where the presenter fussed about taking the (otherwise great) car to a track. Some of the assists couldn't be disabled, so going around the track he would occasionally have to fight the steering wheel when it would try to turn him a certain way to stay within a "lane".
Connecting entertainment systems to flight control sounds very wrong. Connecting entertainment systems to flight management would be common; it should be one-way communication (entertainment can only read FMS data, not send any), for the purpose of driving the moving map displays for passengers.
The moving map could easily be fed from a separated consumer grade GPS. Same for all other metrics that the median passenger would care about (height, speed over ground), except for the ever-impressive outside temperature.
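Feeding the moving map from a standalone receiver really is that simple: consumer GPS modules emit plain-text NMEA 0183 sentences, and the `$GPGGA` sentence already carries position and altitude. A minimal parser (a sketch; field handling is simplified and real code would cover more sentence types) might look like:

```python
from functools import reduce

def nmea_checksum(body: str) -> str:
    """NMEA checksum: XOR of every character between '$' and '*'."""
    return "%02X" % reduce(lambda acc, ch: acc ^ ord(ch), body, 0)

def parse_gga(sentence: str):
    """Parse a $GPGGA sentence into (lat, lon, altitude_m), or None if invalid."""
    if not sentence.startswith("$") or "*" not in sentence:
        return None
    body, checksum = sentence[1:].split("*", 1)
    if nmea_checksum(body) != checksum.strip().upper():
        return None
    f = body.split(",")
    if f[0] != "GPGGA" or f[6] == "0":  # fix quality 0 = no fix yet
        return None
    # Latitude is ddmm.mmmm, longitude is dddmm.mmmm
    lat = int(f[2][:2]) + float(f[2][2:]) / 60
    if f[3] == "S":
        lat = -lat
    lon = int(f[4][:3]) + float(f[4][3:]) / 60
    if f[5] == "W":
        lon = -lon
    return lat, lon, float(f[9])  # f[9] = antenna altitude in metres
```

Since this only ever reads bytes off a serial line, it pairs naturally with the "RX wire cut" isolation discussed elsewhere in the thread.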
Most/all international flights I've been on are really terrible at providing an accurate flight plan or ETA to the passengers via the entertainment system. I would not be surprised if it is already a manual update done when convenient by a member of the flight crew.
They tend to use what's programmed in the FMS. Pilots will have the cleared route in there, even if they already know that they'll probably get several short cuts along the way.
I have never seen one that showed a flight plan. They only had a straight-line (well, a great circle route) to the destination. ETA was pretty clearly straight-line distance divided by (average?) speed.
Consumer grade GPS actually won't work at 30,000+ feet at speeds the plane would be flying. This is to prevent someone from using the GPS system to steer a ballistic missile.
My cell phone GPS works at cruise on an airplane. I can't attest to how location accurate it is but the altitude and speed are usually right on with what the infotainment system says.
The gps on my phone begs to differ. You can (sometimes) acquire enough gps locks to get a signal and subsequent gps data like speed, position, etc. Worth a shot trying because there's very few other times where you'd be able to see your gps sensor read 600mph. You're probably right though, it's just that the cutoff speed is higher than whatever a commercial jetliner speed is.
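The cutoff figures usually cited are the CoCom export limits: roughly 515 m/s (~1,000 knots) and 18,000 m (~59,000 ft). Hedged heavily here, since implementations vary: some receivers disable only when both limits are exceeded, others when either is. A quick sanity check of the thread's observations, using those commonly cited numbers and typical airliner cruise figures:

```python
# Commonly cited CoCom limit figures; actual receiver firmware varies.
COCOM_SPEED_MS = 515        # ~1,000 knots
COCOM_ALTITUDE_M = 18_000   # ~59,000 ft

def gps_may_cut_out(speed_ms: float, altitude_m: float, rule: str = "and") -> bool:
    """Rough check of whether a consumer receiver might refuse a fix."""
    fast = speed_ms > COCOM_SPEED_MS
    high = altitude_m > COCOM_ALTITUDE_M
    return (fast and high) if rule == "and" else (fast or high)

# A jetliner cruising around 490 knots (~252 m/s) at ~41,000 ft (~12,500 m)
# is below both thresholds under either rule.
airliner_blocked = gps_may_cut_out(252, 12_500, rule="or")
```

That would be consistent with phone GPS working in the cabin: an airliner sits well under both thresholds, so only something much faster and higher (the ballistic-missile case) should trip the lockout.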
> it should be one-way communication (entertainment can only read FMS data, not send any), for the purpose of driving the moving map displays for passengers.
Would you agree that this logical boundary should be physically enforced? Such as an opto-isolator?
There should actually be three distinct "air-gapped" networks (or at least classifications of network).
Secure: The network that connects and controls the airplane. Absolutely only essential things allowed on, and if possible isolated using mathematically proven secure vlan/isolation techniques.
"Employee": pilots, crew, etc. This is more just a distinct network for corporate operations security.
"Customer": Still try to keep this one secure so that viruses and such don't spread, but this is the 'DMZ' area.
Communication from the secure network should be outbound only, and might best be done over a fixed rate serial data connection of some sort.
The modern version might be to configure a point to point network link on a CDMA based system and just disconnect the secure side's RX path entirely. Then you just export the data blind via UDP with like 10X redundancy.
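The "export the data blind" idea can be sketched in a few lines of Python. This is a toy over ordinary UDP sockets; the sequence numbers, JSON framing, and 3x redundancy are illustrative assumptions, not anything from a real avionics design (which, per the comment, would also physically disconnect the secure side's receive path):

```python
import json
import socket

REPEAT = 3  # redundancy factor; the comment above suggests something like 10x

def export_blind(sock, dest, seq, payload, repeat=REPEAT):
    """Fire-and-forget: send the same datagram several times, never read back.
    With no RX path there can be no ACKs, so redundancy stands in for them."""
    frame = json.dumps({"seq": seq, "data": payload}).encode()
    for _ in range(repeat):
        sock.sendto(frame, dest)

def receive_dedup(sock, expected):
    """On the open (receiving) side, keep the first copy of each sequence
    number and silently drop the redundant duplicates."""
    seen = {}
    while len(seen) < expected:
        frame, _ = sock.recvfrom(65536)
        msg = json.loads(frame)
        seen.setdefault(msg["seq"], msg["data"])
    return seen
```

The design trade is explicit: you give up flow control and retransmission, and buy back reliability with repetition, in exchange for a link that is one-way by construction rather than by policy.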
Most Boeing planes (though not the 787 discussed here) don't use fly-by-wire, so you could argue that the most essential control plane is perfectly isolated by virtue of not having any sort of network at all, just cables and hydraulics running from the cockpit.
But wasn't the 737 MAX issue that a software system was forcing a stabilizer trim change to pitch the nose down? That sounds pretty fly-by-wire (even if hydraulics are involved).
You could say that the problem was lack of fly-by-wire - if 737-MAX had FBW, the necessary corrections would be bundled as part of flight envelope protection, and that tends to work pretty well and the required checks on FBW systems would catch stuff like "only one AoA sensor used".
Instead we got MCAS which was messing with autotrim signals and escaped scrutiny.
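The kind of check being alluded to (cross-comparing redundant sensors before letting automation act) can be sketched simply. This is a hypothetical illustration, not any certified design; the disagreement threshold is invented:

```python
def vote_aoa(sensor_a_deg: float, sensor_b_deg: float,
             max_disagreement_deg: float = 5.0):
    """Cross-check two angle-of-attack readings.

    Returns the averaged value when the sensors agree, or None when they
    disagree, meaning: inhibit automatic trim and annunciate to the crew
    rather than act on a possibly failed sensor.
    """
    if abs(sensor_a_deg - sensor_b_deg) > max_disagreement_deg:
        return None
    return (sensor_a_deg + sensor_b_deg) / 2.0
```

The point of the parent comment is that a full FBW certification process would have forced exactly this question ("what happens when the single input is wrong?") to be answered before flight.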
This sounds like a good idea, until you realize that the head unit is typically the thing in the car with the most computing power (think AI workloads as well as a GPU and multiple ARM cores) and the thing in the car with the network connection.
A trivial use case which requires write access to the CAN bus is the navigation system informing cruise control of an upcoming hill.
Equally trivial would be the seat position memory or profiles being applied through the main touchscreen.
(I work for a company that is developing infotainment systmes)
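One common way to square those use cases with security is a gateway that only forwards an allowlist of message IDs from the head unit toward the vehicle bus. A minimal sketch, with invented message IDs (real CAN IDs are manufacturer-specific) and frames modeled as plain `(can_id, data)` tuples rather than a real CAN stack:

```python
# Hypothetical message IDs for the two use cases mentioned above.
ALLOWED_FROM_HEAD_UNIT = {
    0x2F0,  # navigation: upcoming-gradient hint for cruise control
    0x3A1,  # seat-profile recall from the touchscreen
}

def gateway_filter(frames):
    """Forward only allowlisted (can_id, data) frames from the infotainment
    side to the vehicle bus; drop everything else, including diagnostics."""
    return [frame for frame in frames if frame[0] in ALLOWED_FROM_HEAD_UNIT]
```

The limitation is obvious from the Jeep research: if an allowlisted message itself can do something dangerous, or the gateway firmware can be reflashed from the compromised side, the filter doesn't save you.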
No fancy circuitry is needed. Just use one fiber. I suspect that 100BASE-T can pull this off with one pair and some hackery. RS-232 should work with only one direction connected.
"security barrier" is a vague, meaningless phrase that they don't define. There shouldn't be any connection at all between flight systems and the in-flight entertainment.
The funny thing is that LEDs (granted, not all types of diodes) can be used to read data as well as transmit it. Videos of such interfaces can be found on YouTube. So, they are not as one-way as some folks may think.
Well it can not only be a LED that is supposed to be one-way, but the whole circuitry driving it. Likewise for the receptor. With some appropriate review of the physical design, you can have a reasonable expectation that the comm will be limited to one-way, with no way for arbitrary SW to modify the direction.
Article feels like it's focused on breaching the critical systems rather than just trying to kernel panic/overflow/etc. them. My concern would be that if you could mess with the ODN such that it crashed or disrupted the data flow of either of the other two systems, I suppose you might brick the plane (because they refer to a "bridge")? Who knows though; maybe there are redundant systems that can handle multiple critical failures. The article is so light on details.
When you're trying to cut the BOM and weight, sharing a network can seem like a good idea.
Also industries have cultures, and dunning-kruger often applies outside their core domains.
For example: I did some work with Mercedes (no insult to them -- I've happily owned several of their cars). They were "real" engineers: the "schnook" of the door when it shut mattered. The brake-by-wire folks modeled everything in Matlab and developed multiple implementations. But the entertainment system? It's not "real" engineering, so it's typically subbed out to the canonical cheapest bidder based on a powerpoint bullet list. The result: a really, really safe vehicle, but the user's actual experience (outside the driving) is not encouraging.
Same issue with a BMW I had: ECU always kept the pollution within spec; the car always started instantly when I turned the key no matter the temperature or environment, but the seats and windows would move randomly.
And we all suffer from this: when I worked on power plants I was (and remain) horrified by the shitshow that is SCADA and the HMI infrastructure. As for many of the other "absurd" things I saw, well, some patient people would show me what an idiot I was and that there were very good reasons for doing things that looked to an outsider like headstands.
You want communications from the flight controls to the maintenance system. And you want communications from the entertainment system to the maintenance system, so technicians have a single list of everything that needs fixing.
It's hard to implement strict one-way communications -- usually you at least need some kind of ACK for reliable transmission.
Put all those together with a vulnerability in the middle, and you have an attack.
My thoughts when I realized I had a USB-C phone and they had just implemented USB-B; or when a device OR a whole row went out. An Alcatel tablet and a mount are cheap; these are easily replaceable, etc.
Also, makes upgrades to new processors and better displays easier. Honestly, with an OLED screen, if there was a good mount on the back of the headrest, I'd rather watch movies on my screen.
From the article: "He was surprised to discover a fully unprotected server on Boeing's network, seemingly full of code designed to run on the company's giant 737 and 787 passenger jets, left publicly accessible and open to anyone who found it. So he downloaded everything he could see."
Is that even legal? Will he ever be allowed to cross the US border after admitting this?
Generally no. There is a difference between being unprotected and being open to the public. While in some cases a person can claim not to have known, and proving mens rea for such a crime is much harder than if protection had to be bypassed, it isn't impossible.
Such laws are selectively enforced, but being this is Boeing, you can expect it will be enforced on their behalf if they have any desire for it to be (given the current PR issues and the impact this might have, they might let this one go, at least for the time being).
I'm surprised that IOActive backed him up on this, seems like it's treading a line that businesses normally are very conservative around. I very much expected him to be unaffiliated with a large organization after reading that.
Given in the US that corporations are allowed to contribute to political campaigns, and that one party has a very strong sense of "regulation is bad", there is intense pressure to deregulate as much as possible. And with the current administration and its choice to not staff many positions while cutting budgets (and while enacting executive orders to eliminate some regulations), the FAA is really unable to do anything.
That's not true, because they just did something: they said they were satisfied with Boeing here. They could have been honest and said "we're not competent to properly evaluate Boeing's design decisions".
But the mandate is that because of budget cuts, the FAA must depend on the manufacturers themselves to be the SMEs. Thus, they are asking Boeing to judge Boeing's own systems.
If B says, "It's all good", and B is the SME, then FAA must agree and pass.
Separation of IFE and avionics networks is something FAA actually chastised Boeing over during 787 design, and forced them to fully separate networks not VLAN crap.
It sounds like the networks, while not air gapped, are being separated by some "high" security design or device... that happened to withstand the attack (hence the testing on Boeing's part). Fair enough?
From my understanding there are "data diodes" used, and as such flows are only from IFE to maintenance, and from avionics to maintenance, but not the other way around.
AFDX (used for network in 787) uses unidirectional messages with no ACKs or anything so you can reliably make a data diode by just cutting cables right.
If I owned a 787, would I be likely to have the rights to lend it to security researchers to test the exploits, or would it be prohibited through a contract that Boeing requires customers to agree to?
Is there a reason that an individual would own a 787 for personal use? E.g., is it a plane that people change the interior layout of for use as a private jet, or are these planes all tied up in commercial use?
If I owned one, I would lend it to the researcher as I would want to know the flaws and risks more clearly.
I'd love to see the software licensing agreement that comes w/ an airliner. I can only imagine it contains the same kind of anti-reverse-engineering clauses that accompany virtually all commercial software.
I can't help but wonder what he planned to do with the plane, any insurance fraud scheme involving a passenger jet would probably invite interesting consequences.
> Is there a reason that an individual would own a 787 for personal use? E.g., is it a plane that people change the interior layout of for use as a private jet, or are these planes all tied up in commercial use?
There's the BBJ (Boeing Business Jet) edition of both the 787-8 and 787-9. Over a dozen of them have been built and delivered.
Drake owns a 767 [1], and the founders of Google own one [2]. A couple members of the Saudi Arabian royal family are reported to own 777s [3]. It's not inconceivable that an individual could own a 787, though I can't find any reference to one yet.
I think, though, that the sort of person who owns one doesn't tend to want to tinker with it. You just hire someone to fly it, and know that it's always available. The difference between owning and renting, at that level, is mostly financials. Besides, even if you own the aircraft, it's likely you don't own the engines [4], and they're kind of an important component of the overall system.
Likewise, I don't know anyone who owns a car who has loaned it to a researcher to analyze it for design flaws. A couple people have done it [5], but for the vast majority of owners, you just use it normally, and if something breaks, you deal with the problem then. Airplanes are loaded with redundancies for critical systems so a lot of things have to go wrong for it to crash.
To be precise, in the culture itself it's usually called "modding" or "tuning"; but the newer parts of it, especially around modifying ECUs, does involve work closer to traditional computer hacking.
(I'm an automotive enthusiast myself, although I'm more into the "old school" non-computer-controlled stuff. Mostly because it's simpler and inherently resistant to remote hackers.)
Only if you want to be able to fly it. I suspect the FAA won't care if you bought a plane and chose to turn it into a house, take it apart and post videos on YouTube, or whatever else doesn't involve it flying.
I'm slightly astonished that the 3 networks mentioned aren't airgapped. I suppose the entertainment system needs to know where the plane is in order to display the flight map, but that should be provided by a dumb serial link with the RX wire cut.
Heck, just a simple standalone GPS receiver would be perfectly adequate. Doesn't need to be particularly fancy, either, because if it fails nothing bad happens but a non-functional map.
Is nobody going mention the irony that the source code was discovered because of a misconfiguration and Boeing is claiming that these vulnerabilities aren’t important because of their secure configuration?
But... it's a researcher making claims and assumptions but no working code... just because the target is Boeing doesn't mean the researcher might not be full of shit.
FAA actually gave a shit and forced Boeing to redesign when they tried to put through a shared network idea, since then the networks are, in fact, gapped.
I would assume in-flight entertainment is Level E and wasn't ever subjected to verification. And yeah that requires physical separation from higher-level systems. So... surprisingly I think I'm on Boeing's side here?
The subsystem that connects in-flight entertainment to anything on the flight deck (assuming an intended one-way, read-only connection) would probably be Level D. I would guess that that subsystem is in error here.
> To be clear, neither Savage nor Koscher believe that, based on Santamarta's findings alone, a hacker could cause any immediate danger to an aircraft or its passengers. "This is a long way from an imminent safety threat. Based on what they have now, I think you could let the IOActive guys run amok on a 787 and I'd still be comfortable flying on it," Savage says. "But Boeing has work to do."
Don't people here think that if Boeing ever gets through the current set of investigations without collapsing, they're going to create the safest plane ever designed? With the amount of scrutiny they're encountering at the moment, I have the feeling every single dark corner is going to be under the spotlight.
Or is the reason too deep, the whole corporate structure too rotten at the core, for there to be any hope?
The larger problem is that the FAA doesn't have the capability to actually check Boeing's safety claims, so they are forced to take Boeing's word for it.
Funding hasn't dramatically changed for the FAA, so won't it just be more of the same? "We've fixed the AOA sensor problems (again). Trust us, our engineers have deemed it safe!"
Well, either that is true or the inherently flawed plane design that has been patched with software kills another 346 human beings. I would personally prefer to sit this experiment out for the rest of my life.
Unfortunately, airlines don't make it easy for me to "opt out" of flying on specific airplanes.
I would be interested to know how today's internet-connected cars are prevented from remote exploitation. I wish they at least separated steering wheel servo controls from infotainment and remote access but I have no way to verify it...
I think the obvious answer is "less safe at first, safer in the long term". Only that a plane is not something you want to ever be less safe, so it's a risk that might not be worth taking