Every PCI device in your computer could be hosting a backdoor: "The PCI specification defines an 'expansion ROM' mechanism whereby a PCI card can include a bit of code for the BIOS to execute during the boot procedure. This is intended to give the hardware a chance to initialize itself, but we can also use it for our own purposes."
The idea is to hook the interrupt handler that normally invokes BIOS code for handling hard drive I/O at boot time so it invokes instead code in the PCI device that applies a backdoor patch to the Linux kernel when it is read off the disk. The author explains how to patch the Linux kernel by overwriting an obscure, seldom-used error message string, and provides sample code for a simple kernel patch that will listen for IP packets with an unused protocol number and run any payload delivered via them on a Linux shell with root privileges.
It's scary how straightforward this looks.
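To make the "listener" half more concrete, here is a minimal sketch written as a stand-alone Linux kernel module rather than the in-place kernel patch the article describes; the protocol number 253 (from the experimental range) is my own choice, and unlike the article's sample code this one just logs the trigger instead of executing the payload:

```c
/*
 * Hypothetical sketch of the listener only: a netfilter hook that watches
 * for IPv4 packets carrying an "unused" protocol number.  The value 253
 * (reserved for experimentation) is an assumption for illustration.
 */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <net/net_namespace.h>

#define TRIGGER_PROTO 253   /* assumed trigger value, experimental range */

static unsigned int trigger_hook(void *priv, struct sk_buff *skb,
                                 const struct nf_hook_state *state)
{
    const struct iphdr *iph = ip_hdr(skb);

    if (iph && iph->protocol == TRIGGER_PROTO) {
        /* The article's version would hand the payload to a root shell here. */
        pr_info("trigger packet seen from %pI4\n", &iph->saddr);
        return NF_DROP;     /* swallow the packet so the stack never sees it */
    }
    return NF_ACCEPT;
}

static struct nf_hook_ops trigger_ops = {
    .hook     = trigger_hook,
    .pf       = NFPROTO_IPV4,
    .hooknum  = NF_INET_PRE_ROUTING,
    .priority = NF_IP_PRI_FIRST,
};

static int __init trigger_init(void)
{
    return nf_register_net_hook(&init_net, &trigger_ops);
}

static void __exit trigger_exit(void)
{
    nf_unregister_net_hook(&init_net, &trigger_ops);
}

module_init(trigger_init);
module_exit(trigger_exit);
MODULE_LICENSE("GPL");
```

The article's version is wedged into the kernel image itself and runs the delivered payload as root, but the hook above is enough to see how small the trigger logic really is.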
Whenever I read things like this, I'm reminded that every day we're trusting our data and privacy to the people who design, build, and distribute all the hardware (and software) we use but didn't create ourselves from scratch.
To paraphrase Ken Thompson, we have little choice but to trust trust.[1]
This is a known issue, and it has been exploited for more or less these reasons. One example was an Intel product where the boot ROMs hosted their own system management product. Basically you could do anything through that code, including reflashing the BIOS on the motherboard, all by plugging in a simple Intel NIC. Later it became this: http://www.intel.com/content/www/us/en/server-management/int...
It gets worse if you can load custom firmware on a disk drive (which anyone can do). Then you load your firmware, which claims to be the firmware the drive shipped with, except it has some features for diddling bits in the read cache if certain conditions are met.
There's really no way for open hardware to be better. Unless you have your own fab, the only way to ensure that back doors haven't been added at the hardware level is to decap and manually validate every chip. That's pretty clearly not viable.
Don't mix up "better" with "perfect". That's the mistake made by people who post a link to the trusting-trust paper: trust isn't binary. "Perfect or give up" is a false dichotomy.
I sometimes say "trust no one", which seems to support that false dichotomy, but I mean to say "Reduce and distribute trust".
Open source code and hardware doesn't remove trust. It reduces the amount of trust required and distributes it among more parties. It makes betrayal harder, more expensive, more temporary, and less destructive. That's not perfect, but that's much, much better.
There's a world of difference between closed source software from MegaCorporatism Inc and open source -- even if an evil genius is still technically capable of sneaking something into a compiler or chip.
Well, in this case it is a binary thing. In an applied sense you're right that it isn't binary, but in terms of what you can actually verify it is black-or-white: you're either sure, or it might as well be compromised.
Not necessarily. As the author of the article said, it's easier to rely on software bugs than to ship a doctored piece of hardware like this. If you manage to backdoor a mass-production piece of hardware (let's say a wifi card or something similar), you're just one odd error away from someone curious finding out what's happening, raising the alarm, and bringing the whole operation crashing down.
From a logistics standpoint, it's easier to break software.
I wonder if the trusting trust "two compiler workaround" could work here? Would it be possible to build a duplicated system with two (or more) of every component, then have them compare themselves to each other. Could you build similar-but-not-identical hardware, sourcing components like pci cards and disks from different vendors, to minimise the chance you've got backdoors in all sets of components? (I'm now thinking that space-shuttle-style multiple computer setups might be a starting point worth investigating.)
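One way to picture the comparison step: run the same operation through independently sourced components and treat any disagreement as suspect. A minimal sketch, assuming two hypothetical read paths (the vendor_a/vendor_b functions are placeholders, not real APIs):

```c
/*
 * Toy lockstep comparison: read the same sector through two independently
 * sourced paths and flag any byte-level disagreement.  The read_sector_*
 * functions are stand-ins for real hardware paths from different vendors.
 */
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE 512

static void read_sector_vendor_a(unsigned lba, unsigned char out[SECTOR_SIZE]) {
    memset(out, (unsigned char)lba, SECTOR_SIZE);   /* placeholder */
}
static void read_sector_vendor_b(unsigned lba, unsigned char out[SECTOR_SIZE]) {
    memset(out, (unsigned char)lba, SECTOR_SIZE);   /* placeholder */
}

int main(void) {
    unsigned char a[SECTOR_SIZE], b[SECTOR_SIZE];

    for (unsigned lba = 0; lba < 1024; lba++) {
        read_sector_vendor_a(lba, a);
        read_sector_vendor_b(lba, b);
        if (memcmp(a, b, SECTOR_SIZE) != 0) {
            /* A divergence means at least one path altered the data. */
            printf("mismatch at LBA %u\n", lba);
        }
    }
    puts("scan complete");
    return 0;
}
```

Of course this only catches a backdoor that fires on one path and not the other; if both sets of components are compromised the same way, the comparison tells you nothing, which is why sourcing from different vendors matters.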
> the only way to ensure that back doors haven't been added at the hardware level is to decap and manually validate every chip.
Not good enough. It's possible to insert trojans by changing the dopants on the silicon [1], which can't be detected with the decap-and-scan method. From the paper:
> our dopant Trojans are immune to optical inspection, one of the most important Trojan detection mechanism.
You really need to trust the people making your hardware, full stop. There are too many holes with even a trust-but-verify approach.
This is really scary. The fact that the collected data must be sent either through the wire or through the air at some point would make the trojan detectable - perhaps when it's too late, but still detectable - wouldn't it?
A related story was posted on reddit a few days ago, about a user who discovered his HP notebook sending out data from its built-in microphone [0].
What's your view on FPGAs? Could you trust FPGA-based computers on the premise that an FPGA fab doesn't know what configuration the chip will eventually be used in? (Of course, you'd still have to trust the FPGA toolchain, but at least the output of that toolchain could in theory be validated to correspond to the input.)
This could be an interesting approach. I imagine it would be very difficult to compromise an FPGA processor implementation in hardware, for the same reason that it's tough to have a processor mess with arbitrary software. The analysis tools needed simply aren't possible to implement at the level you're working at.
You could probably insert a hardware trojan that scans for specific FPGA elements and backdoors them. But there's a potential that an unrelated recompile could alter your signature. An alert adversary would be an even bigger problem. You're trying to hit a moving target from a stationary platform.
Unfortunately, commercial FPGAs today are notoriously proprietary. So while this idea may have theoretical merit, it is not currently an improvement in practice.
> still needs manufacturer supplied toolchain for the rest of the steps
This still kills the desired properties of the system. You need open source tools end-to-end, all the way down to the place-and-route system. A backdoor can be inserted at any point otherwise.
Stepping away from the current state of things, competition in the FPGA space still relies heavily on patents and trade secrets. Until that changes, the proposed approach isn't viable.
I wonder if Xilinx or Altera will ever consider this market space interesting enough to pursue. Unfortunately, my gut says no.
An FPGA fab doesn't know what the eventual configuration will be, but that doesn't stop them (or the foundry) from inserting hardware backdoors. One example is the Actel ProASIC¹.
Vulnerabilities like this make it difficult, if not impossible, to trust commercial hardware. The DoD started the Trusted Foundry Program² precisely for this reason.
You couldn't get the performance out of current technology at a reasonable price point, but perhaps in the future we'll have open-source processors, peripherals, and controllers running on FPGAs (blank configuration until boot-up).
For certain applications, that is not unreasonable. I am sure there are more than a few countries that would be interested in fabbing trustable hardware for their own internal use.
Do you trust all the people who work at your factory, and all the people who touch the design? By the way, do you trust the manager of the factory? And his manager (and so on)?
These are issues that countries need to grapple with regardless. If they don't have these personnel issues reasonably figured out, then backdoors in their hardware are really sort of a moot issue.
These are much easier problems to mitigate in practice than "We just bought this hardware from the US, who is not particularly keen on us, and we have absolutely no fucking idea what is in it."
Another interesting way to look at it is to assume all or some large percentage of the computers in your network are compromised in some unknowable way with hardware or software backdoors/keyloggers.
What measures could you employ in the design of your network (both physical/topological and logical) to minimize the effectiveness of these backdoors?
I think it would look similar to how Tor diffuses things around the network, such that you would need to control a significant number of nodes before being able to discover the identity of a user or the contents of the data bound for that user.
The most obvious answer would be air gaps for absolutely critical systems.
For systems where this is not a viable option, how could you best detect/prevent unauthorized attempts from compromised nodes on your network to 'phone home'?
My first thought would be to establish multiple redundant gateways on the network, with completely different software/hardware stacks, which would all perform the exact same job of vetting packets crossing the network - along with another set of computers, also with unique stacks, which verify that the gateways are all behaving identically and that no extra sneaky packets are being sent out through one gateway and not the others.
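As a toy illustration of that cross-check, here is a sketch that diffs the outbound-packet logs of two gateways; the filenames and the assumption that both gateways record one digest per packet, in the same order, are mine:

```c
/*
 * Toy cross-check of two gateway logs.  Assumes each gateway writes one
 * digest per outbound packet, in order, to a text file; any line that
 * appears in one log but not the other is reported as a discrepancy.
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s gateway_a.log gateway_b.log\n", argv[0]);
        return 1;
    }
    FILE *a = fopen(argv[1], "r");
    FILE *b = fopen(argv[2], "r");
    if (!a || !b) { perror("fopen"); return 1; }

    char la[256], lb[256];
    unsigned long line = 0;
    for (;;) {
        char *ra = fgets(la, sizeof la, a);
        char *rb = fgets(lb, sizeof lb, b);
        line++;
        if (!ra && !rb) break;                 /* both logs exhausted: all good */
        if (!ra || !rb || strcmp(la, lb) != 0) {
            /* One gateway saw traffic the other did not, or order diverged. */
            printf("discrepancy at packet %lu\n", line);
            break;
        }
    }
    fclose(a);
    fclose(b);
    return 0;
}
```

A real deployment would have to tolerate reordering and timing skew between the gateways, but the principle is the same: a compromised gateway has to lie consistently to every independent observer at once.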
Mostly waiting to be researched instead of captured. We're only just starting to figure out how to compile software such that the binary can be audited and proved to match the source. How do you fab hardware so that the IC can be proved to match the blueprint?
--
[1] http://cm.bell-labs.com/who/ken/trust.html