Every PCI device in your computer could be hosting a backdoor: "The PCI specification defines an 'expansion ROM' mechanism whereby a PCI card can include a bit of code for the BIOS to execute during the boot procedure. This is intended to give the hardware a chance to initialize itself, but we can also use it for our own purposes."
The idea is to hook the interrupt handler that normally invokes BIOS code for hard drive I/O at boot time, so that it instead invokes code in the PCI device's expansion ROM, which applies a backdoor patch to the Linux kernel as it is read off the disk. The author explains how to patch the kernel by overwriting an obscure, seldom-used error-message string, and provides sample code for a simple kernel patch that listens for IP packets with an unused protocol number and runs any payload they deliver in a Linux shell with root privileges.
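For a sense of scale, the listener half of such a patch is tiny. Here's a purely illustrative sketch (mine, not the article's code) of a Linux kernel module that claims an unused IP protocol number; 253 is one of the two numbers RFC 3692 reserves for experimentation, and this handler just logs and drops:

    /*
     * Illustrative sketch only -- not the article's code. A loadable
     * module that claims IP protocol 253 (reserved for experimentation
     * by RFC 3692) and merely logs and drops anything that arrives on
     * it. A backdoor would parse and execute a payload here instead.
     */
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/skbuff.h>
    #include <net/protocol.h>

    #define EXPERIMENTAL_PROTO 253

    static int demo_rcv(struct sk_buff *skb)
    {
        printk(KERN_INFO "proto-253 packet received, %u bytes\n", skb->len);
        kfree_skb(skb);            /* consume the packet */
        return 0;
    }

    static const struct net_protocol demo_proto = {
        .handler = demo_rcv,
    };

    static int __init demo_init(void)
    {
        /* fails if something has already claimed this protocol number */
        return inet_add_protocol(&demo_proto, EXPERIMENTAL_PROTO);
    }

    static void __exit demo_exit(void)
    {
        inet_del_protocol(&demo_proto, EXPERIMENTAL_PROTO);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

The article's patch is applied by rewriting the kernel image in memory rather than by loading a module, but hooking an unused protocol number is the same idea.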
It's scary how straightforward this looks.
Whenever I read things like this, I'm reminded that every day we're trusting our data and privacy to the people who design, build, and distribute all the hardware (and software) we use but didn't create ourselves from scratch.
To paraphrase Ken Thompson, we have little choice but to trust trust.[1]
This is a known issue, and it has been exploited for more or less these reasons. One example was an Intel product where the boot ROMs hosted their own system-management product. Basically, you could do anything through that code, including reflashing the BIOS on the motherboard, all by plugging in a simple Intel NIC. Later it became this: http://www.intel.com/content/www/us/en/server-management/int...
It gets worse if you can load custom firmware on a disk drive (which anyone can do). Then you load your firmware, which claims to be the firmware the drive shipped with, except it has some features for diddling bits in the read cache if certain conditions are met.
There's really no way for open hardware to be better. Unless you have your own fab, the only way to ensure that back doors haven't been added at the hardware level is to decap and manually validate every chip. That's pretty clearly not viable.
Don't mix up "better" with "perfect". That's the mistake made by people who post a link to the trusting-trust paper: trust isn't binary. "Perfect or give up" is a false dichotomy.
I sometimes say "trust no one", which seems to support that false dichotomy, but I mean to say "Reduce and distribute trust".
Open source code and hardware doesn't remove trust. It reduces the amount of trust required and distributes it among more parties. It makes betrayal harder, more expensive, more temporary, and less destructive. That's not perfect, but that's much, much better.
There's a world of difference between closed source software from MegaCorporatism Inc and open source -- even if an evil genius is still technically capable of sneaking something into a compiler or chip.
Well, in this case it is a binary thing. In an applied sense you're right, it isn't binary; but when you talk about feasibility it is black-or-white. Either you're sure, or it might as well be compromised.
Not necessarily. As the author of the article said, it's easier to rely on software bugs than to ship a crocked piece of hardware like this. If you manage to backdoor a mass-production piece of hardware (say a wifi card or something similar), you're just one odd error away from someone curious finding out what's happening, raising the alarm, and bringing the whole operation crashing down.
From a logistics standpoint, it's easier to break software.
I wonder if the trusting-trust "two compiler" workaround could work here? Would it be possible to build a duplicated system with two (or more) of every component, then have them compare themselves to each other? Could you build similar-but-not-identical hardware, sourcing components like PCI cards and disks from different vendors, to minimise the chance you've got backdoors in all sets of components? (I'm now thinking that space-shuttle-style multiple-computer setups might be a starting point worth investigating.)
> the only way to ensure that back doors haven't been added at the hardware level is to decap and manually validate every chip.
Not good enough. It's possible to insert trojans by changing the dopants on the silicon [1], which can't be detected with the decap-and-scan method. From the paper:
> our dopant Trojans are immune to optical inspection, one of the most important Trojan detection mechanism.
You really need to trust the people making your hardware, full stop. There are too many holes with even a trust-but-verify approach.
This is really scary. The fact that the collected data must be sent either through the wire or through the air at some point would make the trojan detectable - perhaps when it's too late, but still detectable - wouldn't it?
A related story was posted on reddit a few days ago, about a user discovering that his HP notebook was sending out data from its built-in microphone [0].
What's your view on FPGAs? Could you trust FPGA-based computers on the premise that an FPGA fab doesn't know what configuration the chip will eventually be used in? (Of course, then you'd still have to trust the FPGA toolchain, but at least the output of that toolchain could in theory be validated to correspond to the input.)
This could be an interesting approach. I imagine it would be very difficult to compromise an FPGA processor implementation in hardware, for the same reason that it's tough to have a processor mess with arbitrary software. The analysis tools needed simply aren't possible to implement at the level you're working at.
You could probably insert a hardware trojan that scans for specific FPGA elements and backdoors them, but an unrelated recompile could alter your signature, and an alert adversary would be an even bigger problem. You're trying to hit a moving target from a stationary platform.
Unfortunately, commercial FPGAs today are notoriously proprietary. So while this idea may have theoretical merit, it is not currently an improvement in practice.
> still needs manufacturer supplied toolchain for the rest of the steps
This still kills the desired properties of the system. You need open source tools end-to-end, all the way down to the place-and-route system. A backdoor can be inserted at any point otherwise.
Stepping away from the current state of things, competition in the FPGA space still relies heavily on patents and trade secrets. Until that changes, the proposed approach isn't viable.
I wonder if Xilinx or Altera will ever consider this market space interesting enough to pursue. Unfortunately, my gut says no.
An FPGA fab doesn't know what the eventual configuration will be, but that doesn't stop them (or the foundry) from inserting hardware backdoors. One example is the Actel ProASIC¹.
Vulnerabilities like this make it difficult, if not impossible, to trust commercial hardware. The DoD started the Trusted Foundry Program² precisely for this reason.
You couldn't get the performance out of current technology at a reasonable price point, but perhaps in the future we'll have open-source processors running open-source peripherals and controllers on FPGAs (blank configurations until boot-up).
For certain applications, that is not unreasonable. I am sure there are more than a few countries that would be interested in fabbing trustable hardware for their own internal use.
Do you trust all the people who work in your factory, and all the people who touch the design? By the way, do you trust the manager of the factory? And his manager (and so on)?
These are issues that countries need to grapple with regardless. If they don't have these personnel issues reasonably figured out, then backdoors in their hardware are really sort of a moot issue.
These are much easier problems to mitigate in practice than "We just bought this hardware from the US, who is not particularly keen on us, and we have absolutely no fucking idea what is in it."
Another interesting way to look at it is to assume all or some large percentage of the computers in your network are compromised in some unknowable way with hardware or software backdoors/keyloggers.
What measures could you employ in the design of your network (both physical/topological and logical) to minimize the effectiveness of these backdoors?
I think it would look similar to how Tor diffuses things around the network, such that you would need to control a significant number of nodes before being able to discover the identity of a user or the contents of the data bound for that user.
The most obvious answer would be air gaps for absolutely critical systems.
For systems where this is not a viable option, how could you best detect/prevent unauthorized attempts from compromised nodes on your network to 'phone home'?
My first thought would be to establish multiple redundant gateways on the network, with completely different software/hardware stacks, all performing the exact same job of vetting packets crossing the network, along with another set of computers, also with unique stacks, which verify that the gateways are all behaving identically and that no extra sneaky packets are being sent out through one gateway and not the others.
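As a toy sketch of that verification step (entirely hypothetical; real gateways would compare live captures and tolerate reordering), a verifier that reads one packet digest per line from two gateway logs and alarms on the first divergence might look like:

    /*
     * Toy cross-check, entirely hypothetical: read one packet digest
     * per line from two gateway logs and alarm on the first divergence.
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s gateway-a.log gateway-b.log\n", argv[0]);
            return 2;
        }
        FILE *a = fopen(argv[1], "r");
        FILE *b = fopen(argv[2], "r");
        if (!a || !b) {
            perror("fopen");
            return 2;
        }
        char la[256], lb[256];
        unsigned long line = 0;
        for (;;) {
            char *ra = fgets(la, sizeof la, a);
            char *rb = fgets(lb, sizeof lb, b);
            line++;
            if (!ra && !rb)
                return 0;          /* both logs ended together: OK */
            if (!ra || !rb || strcmp(la, lb) != 0) {
                /* one gateway saw (or emitted) traffic the other didn't */
                fprintf(stderr, "divergence at line %lu\n", line);
                return 1;
            }
        }
    }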
Mostly, this is waiting to be researched rather than already captured anywhere. We're only just starting to figure out how to compile software such that the binary can be audited and proved to match the source. How do you fab hardware so that the IC can be proved to match the blueprint?
It's funny. Folks used to laugh at me for being too paranoid to allow loadable kernel modules on my firewalls (actually, I preferred to host the kernels on read-only media and have no LKM support at all, with custom-compiled kernels). What seemed horribly paranoid to so many people seems so reasonable today.
That being said, one of the key issues is that a compromised motherboard or controller (an Ethernet controller, for example) could make such changes in RAM after the boot process has completed, with or without the help of the BIOS. The level of paranoia certainly needs to be stepped up a bit.
The thing is that distros tend to use loadable modules, and if you want to avoid that you need to compile your own kernel (as you seem to be doing). For my part, I am a lot happier getting security updates from my distro than being on the hook for recompiling in a timely fashion myself.
You can get most of the security benefits of avoiding loadable modules by setting the sysctl kernel.modprobe (i.e., /proc/sys/kernel/modprobe) to "/bin/false" instead of "/sbin/modprobe", late in the boot process. So everything needed to initialize your hardware is loaded, but anything that an unprivileged user attempts to autoload (like a buggy kernel module for a socket family you've never heard of) fails.
I have a config like this on all the security-sensitive servers I run, which tend to have a few thousand unprivileged users. It's actually a shell script that logs the attempt and then returns false, instead of silently returning false, but "/bin/false" is good enough.
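The logging variant is tiny, for what it's worth. A hedged C equivalent of that shell script (log wording and facility are my choice): compile it and write its absolute path into /proc/sys/kernel/modprobe late in boot:

    /*
     * Hypothetical stand-in for /sbin/modprobe: log the attempted
     * module autoload, then fail like /bin/false. Install by writing
     * this binary's absolute path into /proc/sys/kernel/modprobe
     * after boot-time module loading is done.
     */
    #include <syslog.h>

    int main(int argc, char **argv)
    {
        openlog("modprobe-trap", LOG_PID, LOG_AUTH);
        /* argv[1..] is whatever the kernel asked to autoload */
        for (int i = 1; i < argc; i++)
            syslog(LOG_WARNING, "refused module autoload: %s", argv[i]);
        closelog();
        return 1;   /* nonzero exit, like /bin/false */
    }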
But do note that this is a bit orthogonal to the issue mentioned in the article: the proposed attack involves the victim machine having the kernel and modules intact on disk, but device firmware compromised so that it changes the kernel after it's been loaded into memory.
Building a kernel isn't difficult (and used to be pretty much required). Build support in many distros is quite robust.
The real challenge is that once you've compiled a kernel, that's all you've got in terms of support. If you need to add filesystem support, a networking capability, additional driver support, etc., you've got to configure and build a new kernel, and test it, which is distinctly less convenient than autoloading an existing module (or even one you've newly compiled in many cases, on a running kernel).
I seem to recall a kernel option or sysctl, possibly from OpenBSD/FreeBSD, which prevents loading of additional modules once it's been set. This allows you to boot and load modules, but then no more. If your boot media are read-only, this gives a fairly high level of confidence.
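That would be the BSD securelevel. On FreeBSD, raising kern.securelevel above 0 makes the kernel refuse to load or unload modules, and the level can't be lowered again on a running system. A minimal sketch of raising it programmatically (normally you'd just set kern_securelevel in /etc/rc.conf):

    /*
     * FreeBSD sketch: raise the securelevel to 1 after boot-time
     * module loading is done. At securelevel >= 1 the kernel refuses
     * kldload/kldunload, and the level cannot be lowered on a
     * running system.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int main(void)
    {
        int level = 1;

        if (sysctlbyname("kern.securelevel", NULL, NULL,
                         &level, sizeof(level)) != 0) {
            perror("sysctlbyname(kern.securelevel)");
            return 1;
        }
        printf("securelevel raised; module loading now refused\n");
        return 0;
    }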
For a firewall, the hardware is pretty constant and you don't generally need new filesystems, so it's just a matter of regularly downloading the source, compiling, and rebooting (with the option to boot back if something goes wrong). If you need high availability, it isn't that hard to set up a testing environment with a duplicate set of hardware. For my consulting business, the availability needs were such that testing in place with the option to roll back was sufficient.
" … I am a lot happier getting security updates from my distro than being on the hook for recompiling them myself in a timely fashion."
It seems to me that security updates for the kernel are only a tiny part of what a distro releases as security updates. For something like a dedicated firewall – if I've spent the time to compile my own kernel, I'm unlikely to be running all the userspace software that a distro needs to keep updated/secured. My firewall box (probably) doesn't need to update to fix newly discovered flaws in MySQL or PHP or whatever.
Right. Additionally, everything was configured so that where a network service was required (for example, ssh), it listened only on the internal interface. The external network exposure was tiny.
Keep in mind – the people I might have called "overly paranoid" in the pre-Snowden internet era would probably advise you to secure your infrastructure just as much from internal attacks as external ones – anybody targeting you specifically (as opposed to a fly-by botnet powered net-wide vulnerability exploit) is likely to get a foothold on a less-protected machine inside your firewall via non-direct means (spear-phishing an admin's laptop or NSL-ing your OS or router vendor).
That makes it far more "interesting" working out appropriate protection against high-level attacks – fortunately for me it's purely a hypothetical defense; my personal (and professional) stance is that if law enforcement or state-level espionage targets me, I'm hosed, and I'll happily turn over passphrases and encryption keys to anyone with a badge (and hopefully a court order). And I assume any of the people I rely on for security (from my ISP to my VPS provider, my SaaS vendors, my OS vendor, through to my hardware suppliers) will sell me out pretty much instantly if the NSA (or GCHQ, or ASIO) asks them to. I can _probably_ trust a RaspberryPi that's never been network connected – but it'd be foolish to assume anything else digital I own isn't trivially vulnerable to the NSA if they cared enough about it.
My internal exposure was small because there weren't many people in the office (usually 2-3 at most). In a larger environment I would probably filter to specific admin access points.
I think in this "post-Snowden" era, you now need to consider not just the people on the internal network, but whether any of the gear on that network might be betraying you to the NSA.
I'm sitting here in my loungeroom looking at my printer, the PlayStation, the Media Server, a bunch of laptops, a few phones, a couple of iPads, a Mac Mini, the linux box, a RaspberryPi, the cheapo Chinese ADSL/wifi box, and the old NetGear Ethernet switch – and wondering whether any of them are taking advantage of the privileged access my home IP address has on a bunch of other internet-connected networks?
You have my vote. Static kernels are the way to go for production systems. This also gets rid of some nasty dance steps in case your boot device needs a driver that loads as a module. RAM disks used to get around such limitations are pretty easily subverted, and hardly anybody ever looks at what is actually going on in there, which makes them an excellent place to pull tricks.
You don't even need this. Patrick Stewin and Iurii Bystrov[1] released an excellent paper earlier this year (and Patrick presented his research at 44CON[2] last week) on abusing Intel's iAMT functionality to create an in-firmware keylogger and undetectable (from the host's perspective) exfiltration mechanism for streaming out keystrokes and receiving malware updates.
Doesn't this basically make it impossible to trust any hosting company that provides the equipment for you? The NSA could have every server at Rackspace backdoored.
The more I think about this, the more convinced I am that the only way to avoid private-data snooping is to pollute / poison the data.
If all channels of communication are flooded with poisoned messages, it wouldn't matter who / what snoops the data. The poisoning needs to be obvious so that the intended recipient can immediately ignore it. At the same time, it needs to be ubiquitous so that machines can't filter it.
Yeah, human recipients and snoopers would both be able to filter out the poison. But it would make automated collection difficult.
Another idea could be a reverse captcha. All messages by default could be coded as images. (Hey, in fact I think this is a brilliant idea if I say so myself!) Combine that with poisoning, and we could be safe from automated collection for at least a decade. Combine that with encryption and other security measures, and that would be awesome.
In fact, I am on an idea spree. What if messages were encoded with a captcha? Enter the captcha to decode the message. This encoding is purely to eliminate automated collectors and indexers.
Spam is easier to tackle, since a spammer can be tainted forever; a poisoned feed still needs to be processed every single time.
To backdoor a server at a large hosting provider, an NSL would be sufficient, so I doubt that the NSA would go to such lengths just to break into a single server.
This is worse on a larger scale: The same code could be on each hard drive sold in the US. You can't even trust your own computer any more.
A hard-drive firmware implementation would be a different approach from the PCI one discussed here; though not impossible, it would probably be more difficult to execute.
Given the heightened awareness around all of the recent news, you can safely assume people are actively looking for existing backdoors in firmware. Theorizing about a single provider is a bit unfair, especially given that there are other services actively engaging in business with intelligence agencies. It would be much easier and more reliable to just snapshot your instance and work on it offline than to attempt a remote exploit on the hardware running it.
It seems to be very specific to an OS, and maybe even to a kernel version; how generic could it be made? Aren't there kernel security modules that could detect that something has been overwritten? The paper was written in 2010; maybe something has since been done in kernel development to make that kind of check possible.
What about detecting whether that kind of backdoor is present in your current hardware? Is that possible while a backdoor is running? Or could loading a rarely used OS for that kind of validation (one of the BSDs, say, or a kernel with modules disabled) avoid triggering the backdoor and make detection possible?
Wouldn't your stateful packet-inspection firewall notice the remote attempts to reach the backdoor? Assuming it's running OpenBSD and isn't also full of hardware backdoors.
This is one reason I never buy hardware p2p off bitcoin trading sites and forums, since it would seem logical to target those buyers, who may have stuffed wallets to clean out.
I'm sure the use of a funny protocol packet is just to keep the example simple, making an evil module that opens a tunnel that a DPI firewall wouldn't notice is routine gruntwork to the kind of attacker that can manage to sneak an evil PCI device onto your bus.
It wouldn't be that hard to have the backdoor reach out to a remote host instead. Once you have any amount of control at the internal level, firewalls filtering inbound packets are basically worthless.
--
[1] http://cm.bell-labs.com/who/ken/trust.html