Chrome lets hackers phish even 'Unphishable' Yubikey users (wired.com)
141 points by asm on March 2, 2018 | 109 comments



This is the attack:

> If a victim logs into a fake Google site, the phishing site passes on their username and password to the real Google login page. Then the spoofed site passes back Google's request for the user's U2F token and collects the Yubikey's unique answer, all via WebUSB. When that answer is then presented to the real Google site, the attackers gain access to the victim's account.

So basically they are somehow able to trick the Yubikey Neo into accepting a challenge from a different domain, by using the WebUSB API.

Reading further:

> The technique would only work with U2F keys that offer protocols for connecting to a browser other than the usual way U2F tokens communicate with a computer, known as the Human Interface Device or HID, which isn't vulnerable to the attack. The Yubikey Neo, for instance, can also connect via the CCID interface used by smartcard readers

> An assumption was made by Chrome that all U2F is HID, which doesn't hold for the Neo, whereas Yubico made an assumption that USB will never be accessible by web pages directly

So:

- Don't use a Yubikey Neo anymore

- Don't use Chrome

- Don't use U2F because Firefox doesn't support it

- Never use your Yubikey because hardly anything supports it

Sigh


> - Don't use U2F because Firefox doesn't support it

It does! Open about:config and switch security.webauth.u2f to true. It'll Just Work.

In the recent past I've modified a barebones Perl webapp to try and understand U2F better; see https://u2fdemo.darkpan.com/

I've been able to log in / use U2F from:

* FF on Windows and OSX

* Chrome on Windows, OSX

* Chrome on Android using an OTG cable with a U2F USB key, a Bluetooth U2F key, or an NFC U2F key (works if you install Google Authenticator)

* Unfortunately, not FF on Android as I can't find how to enable U2F there yet :/


It works, but only partially, and is still very very broken, which is why it is disabled in the first place. See also this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1065729


Nonsense. It works fine on Github, Fastmail, Gandi... it doesn't work on Google because Google uses a different spec. That bug is about making Firefox compatible with the variation that Chrome/Google uses.


That bug mentions that Facebook is also broken.

I am kind of surprised that the sites you mention can implement the spec correctly but Facebook and Google can't.


Google is notorious for not following spec.

In this case, though, what happened is that Google implemented a different, earlier version of the spec than most everyone else. Mozilla is busy implementing that spec as well to bridge the gap.


Google's CardDAV server isn't standards compliant (2014) https://news.ycombinator.com/item?id=16412730

I'm not surprised.


Firefox barely supports U2F. It works on Github and Dropbox, but doesn't work on sites like Vanguard and Google. Every time I do a Firefox update I do a search of the bug listing, and they seem to have an incomplete implementation of the spec. They're kicking the can until they fully implement the WebAuthn API and jump over dealing with whatever earlier spec they were targeting.

Speaking of which, why does Vanguard force you to still have SMS two factor available even when you add a U2F device...


> Firefox barely supports U2F.

Haha, it's funny you put it this way, because it's actually Firefox that implements the FIDO U2F standard correctly and Chrome that does not. Chrome uses a low-level API to communicate with its built-in extension, and the high-level shim it provides is not 100% spec compliant.

Google did not bother to use U2F correctly on their accounts site; Github, for example, did it correctly, and their 2FA works in any browser (that is, FF and Chrome).


The usual rationale from companies forcing SMS two factor is that you need to have a convenient account-recovery mechanism before you enable something strict and lock yourself out. They don't want the support cost of dealing with these lockouts.

Unfortunately, these same companies often then claim that there is no harm in SMS two factor since "clearly it is stronger than one factor". But they are blind to their own systematic design flaw which is that the same SMS setting to enable two factor also usually enables one-factor password-recovery via this supposedly trusted phone.

Given what we know about SMS security, it is pretty obvious that one-factor SMS is weaker than a one-factor good strong password. And if the good strong password can be merrily reset by whoever hijacks your phone, you have really just decreased your security posture while performing this whole security theater around two-factor and hardware tokens.


SMS is already 2FA. You need the SIM card and the PIN code. Hence a hijacked phone could be seen as stronger than a 1FA password.


Unfortunately, the cellular network's security is kind of a joke, so an attacker can intercept your messages if he is near you.

Not to mention that traffic inside the network is not encrypted so a lot of parties have legitimate access to the messages anyway.

I understand your point but SMS should not be used as the only factor for authentication.


Correct me if I am wrong, but these SMS-based login setups are only sending a message to your phone number. It's about as secure as sending an email to your email address. There is no end-to-end security between the original sender and the subscriber's phone and SIM card to ensure that the message only gets to the correct recipient.

You only need to hijack the victim's phone number so that messages are sent elsewhere. This can be done by technical or social hacks such as porting the subscriber's number to a new provider or pretending a phone was lost and having the phone company register a replacement SIM. There is no need to physically intercept the victim's phone, so it is not in fact a second factor.


Google is the one that isn't compliant.

Firefox is compliant.


Does it actually not work on Vanguard... or is it that Vanguard does user-agent sniffing and says Firefox is not compatible?


> It does! Open about:config and switch security.webauth.u2f to true. It'll Just Work.

Unfortunately for a large number of users that effectively means it doesn't work.


> Chrome on Android using either a OTG cable for a U2F USB key

Which key did you use? I tried a Yubikey 4 (via OTG cable) and a 4C (directly), and the U2F flow with Authenticator did not work (it was as if the key was not recognized for U2F).


ok, scratch that, use Firefox then

but still hardly anything supports U2F :-(


Not everything supports U2F, but plenty of things do, many of them high value services:

http://www.dongleauth.info/


The existence of WebUSB is awful. Why anyone ever thought it was a good idea to let JavaScript touch your USB devices is beyond me.


It's simple: allowing sandboxed code to request limited access to a USB device is more secure than having users install native, unsandboxed code with access to everything on their PC.


Assuming the sandbox works. If the sandbox is porous, the attack surface balloons from apps I choose to install to every link I click.


Not every link you click. Only sites that you grant access to the necessary attack surface. The WebUSB API can't be attacked by sites that you haven't granted access to it.


What if that privileged website has an XSS vulnerability?


Then the attacker gets access to that USB device. (And only that USB device.)

What if your unsandboxed native USB utility has an RCE vulnerability?



As opposed to native desktop apps, which get all the same permissions by default that a web app requires a zero-day sandbox escape vulnerability to achieve?


Native desktop apps are limited in number and are nowhere near the dumpster fire the web is. My desktop isn't routinely downloading and executing payloads from the web. They're clearly different.


This is about the WebUSB API, not the entire web in general. Are you routinely granting web pages access to your USB devices? That's not a permission that web apps get by default (unlike with native desktop apps, btw).

It comes down to this: if you ever found yourself in a situation where you needed to connect a USB device to a remote service, would you prefer to download that service's unsandboxed native code to your PC and execute it? Or execute some JS in the browser sandbox and grant it limited access to that one specific device?


There are operating systems which don't by default give every application running as every user account access to every storage device.


>if you ever found yourself in a situation where you needed to connect a USB device to a remote service

I have never found myself in that situation. That sounds like a really silly idea.


Then click "deny", or (in the case of a native app) refuse to install the executable. Either way you're safe.

For those that _do_ require [such use cases][1] though; they can now do so without needing to expose their system to an unsandboxed native app.

[1]: https://wicg.github.io/webusb/#motivating-applications


I know to do that. How about my grandma, who just clicks whatever button looks like it'll make the message go away sooner?

The web is a disaster and WebUSB is a prime piece of evidence supporting this.


Would that include the "Run" button on a downloaded executable?

While obviously we want to do as much as we can to discourage users from shooting themselves in the foot, there are limits. At some point, eventually you _do_ have to trust that the user knows what he's doing.

Giving users a choice on when to allow a page access to one specific USB device is not a "disaster".


Just because you've never dreamed up a situation where it might be useful doesn't mean that they don't exist.


Clarification: I've never wanted to do that in a web browser.


Attaching a bootable USB drive to an HTML5-based KVM? Sure, it could be accomplished in other ways, but why not this way?


Why an HTML5 based KVM? A desktop app would be great for that!


Strongly disagree here.

I think WebUSB will enable a lot of cool things, though it will definitely be hard to sandbox.

I want to program an Arduino from a web IDE. I want to control a 3D printer or pen plotter from a web application. I want to store things on a flash drive on my iPhone using a web-based file explorer. (That last one sounds strange, I know.)

On a tangent, I see application runtimes moving into the browser by default, with very few performance-critical applications remaining outside that stack. Photoshop and AAA games might be exceptions. Services like databases and web servers would not have a need to be in Chrome.

(don't hurt me, I know I'm strange.. ~)


Just because it's possible doesn't mean we should!

I see nearly no good reason for any of those uses to be web-based, and certainly not with hardware control! If you want to store files on your flash drive, the browser should handle that, not give out direct USB access...

Hearing this exists was a shock, like when the first Android 'Instant' app showed up on my phone (apps that you don't install, but that run themselves when you go to a website or a physical store, without asking you).


Considering your use cases, why not simply require an extra step in the browser to enable it instead of turning it on for all users?

EG, take a few seconds to turn on a flag in configuration, or add a plugin.


There is a permission prompt that appears; websites can't use the WebUSB functionality until you allow them to.


Why do you want to use a Web IDE? I am not trying to cause tension, I am just genuinely curious.


For me: teaching!

Getting 400 students up and running with Python is painful enough on its own, even without USB. For projects, some small fraction of those used USB, and we again spent a pretty good chunk of time getting them all working.


I would think that getting Python up and running is a valuable lesson.


I worked for an online retailer a long long time ago and this would have been nice to have at our packstations. The tech at each station consisted of a Linux desktop, a barcode scanner, a printer, and a scale. The commerce platform was web based and displayed the current weight on the scale on the pack page.

Originally the scale was connected via serial port to the desktop. A little Perl service we called "the scale daemon" exposed the current weight on some localhost port. The backend would constantly poll that service and the web browser would poll the backend to keep the value on the page up to date. It was a nifty solution albeit a bit clumsy and it meant running an extra piece of software on 50+ machines in a warehouse.

At some point our scale vendor stopped selling scales with serial connections or maybe it was that the PC vendor stopped including serial ports. It was 10+ years ago and I can't recall exactly. What I do remember is that the guy that wrote the scale daemon had moved on and I had to figure out how to make it work with USB. It was fun and probably not as hard as it seemed at the time, but wow, being able to get at that scale from inside the browser would have massively simplified things.


[flagged]


This comment added nothing; in the future, please consider offering substantive criticisms.


More importantly: "the phishing site would also have to ask the user's permission to enable WebUSB access to their Yubikey, and then tap the physical button on the key."

So don't do that. It would be nice to know exactly what this dialog looks like, but it seems low risk?


Convincing users to grant access to a USB device when they're attempting to log in to a service using said USB device sounds like something that would work more often than not. We wouldn't need phishing-resistant authentication methods if humans were good enough at making those kinds of decisions.


I have to admit that in all of my use of my Yubikey Neo in Chrome I don't recall ever being asked for permission to access the device. Firefox hasn't asked either.


I'm not saying that you need to grant any kind of permission in order to use U2F tokens, but rather that a user thinking "I want to login to Google" and "I need to use that USB key thingy to do that" is quite likely to accept a prompt that requests access to the U2F device.


Sorry, I guess what I was getting at is that in hindsight I'm surprised no browser ever explicitly asked me for access to the Yubikey or told me why it needed it; I've just blindly trusted it because of the few sites I use it with.

On the other hand, it's basically functioning as another keyboard device and not a special USB device so it shouldn't be that surprising, right? (serious question)


>So don't do that.

How about you tell users to simply not enter their password into phishing sites?

When users want to do something (sign in) and there are instructions on the page telling them to do something (enter a password or accept USB access), then the users will do it.


Hopefully better support for U2F devices is on the way at both the browser and website level.

I wish more websites offered the option to use it.


Presumably you are referring to WebAuthn[1]. I am optimistic that this will lead to better browser support, and consequently better website support. IIRC it's expected to reach Firefox stable in the May release. Hopefully the various sites that currently only support U2F in Chrome will move to this new standard.

[1]: https://www.w3.org/Webauthn


I didn't even realize WebUSB was a thing now. I remember it being talked about, but I must have missed the HN conversation when Chrome implemented it.

So does Chrome ask for permission to allow USB access? Or maybe there's something about this I'm not getting.


There is a permission prompt, but it's fairly easy to convince users to accept it during a login attempt when they're expecting their USB U2F device to be used.


It's surprising that this works. Last time I checked WebUSB, the device had to have a descriptor allowing its use via web page, effectively whitelisting what can be used on the web.


Looks like they changed it: https://wicg.github.io/webusb/#attacking-a-device

While I understand the reasoning behind that move, I'm not sure I fully agree with it. I don't think users will necessarily understand the implications of granting a site access to a device that wasn't designed with attacks from malicious code as part of its threat model. At the very least, the wording on the permissions dialog should be changed to indicate that the user is granting the site _full control_ over the device they're connecting it to.


Or disable WebUSB.


Or just don't click "Connect" on the USB access permissions prompt when it pops up.

Unfortunately though, as with any phishing attack, this flaw is most likely to be effective against uninformed users, and those users are the least likely to take proactive measures to protect themselves beforehand.

Fortunately:

> "We will have a short term mitigation in place in the upcoming version of Chrome, and we're working closely with the FIDO Alliance to develop a longer-term solution as well."


What kind of uniformed user uses a YubiKey?

I suppose you could trick them by saying that the login process has changed and they need to enable WebUSB to let their YubiKey work.


Uninformed users who have an informed friend looking out for them but not looking over their shoulder every single minute.


Not parent but great point, thank you.


This seems to indicate that DoD uses them. Perhaps it's mostly contractors, but there are probably some liaison-type uniformed people too:

https://www.yubico.com/about/reference-customers/department-...


The purpose of a Yubikey is to prevent users from making mistakes.

This phishing attack removes the benefit that Yubikeys provided.

Sure, a smart user can decline the permission prompt. But a smart user can also simply not enter their password into phishing pages.


tqbf, pinboard, and zeynep are handing them out to journalists.

There is an enormous need for some solution resistant to users who aren't good at identifying legitimate vs phishing sites. U2F as it stands is the only practical and deployed solution to that problem. It's infuriating that Chrome broke this security promise to compete with Microsoft.


It sounds like the report glossed over a rather important bit of UX:

If, when logging in, you see a big modal dialog asking if you want to grant WebUSB access to the site, then DON'T select your Yubikey out of the list of connected USB devices and click "Allow".

As long as you can convince yourself to avoid taking that particular unusual action, it sounds like you're fine.


If the user thinks the site is Google it’s not that strange they’d give them access to the key.


You can disable CCID, does that not solve the issue?


It's almost as if browsers are slowly reinventing Java applets while ignoring all of the security implications that go along with them.


They're slowly inventing operating systems, complete with hypervisor technology, with all the gargantuan complexity that it implies, to please big business that wants the client OS to essentially become obsolete.


Web browsers are so much more secure than what we had before (just accepting executable binaries from other people), so I look at this as a way forward.


I'm not that confident. Browsers blindly accept and execute whatever they receive. The more features that get added, the larger surface there is to exploit. A case in point: WebUSB as mentioned in the article.


The nice thing though is that, although the added attack surface is there, it's not really accessible to web pages until a user grants the necessary permissions. Not really all that different from telling users to execute a native app, in that respect.

In this case it's not even an exploit really; more like social engineering. (Tricking users into granting the phishing site unrestricted access to their Yubikey, then using that access to trick the user into authenticating a login session for the phishing site.)


Imagine if there were a USB device with a new Chrome WebUSB driver (which has the necessary permissions) and then the vendor's website gets hacked.


A browser is more secure than a linux namespace with SELinux rules that require explicit approval for any access?

A browser is more secure than Qubes?

The flaw is in legacy software, not in what is possible. Had humanity spent the effort that was spent on browsers on operating systems instead, we'd have had the same security improvements without all the negatives.


And yet, at a quick glance, Chromium has THREE TIMES more CVEs than the Java JRE... so it might be more secure than before, but let's not celebrate just yet!


When I see a shady link, I open it in a browser inside my virtual machine. I've had attacks on Firefox that straight-up executed a binary on my machine without me doing anything but clicking a link.


But if we ever stopped adding features, things might start working as intended.


I'm unclear as to how this would work in practice. Chrome supports U2F out of the box, so if you got a big weird pop-up asking to access your USB device, you'd at least be suspicious.

Upon registration, the server also collects a nonce, which is used for verification[0]. The attackers would need to get that nonce from the site. Hopefully, the site disables CORS so a phishing site cannot request a challenge.

Lastly, on Linux (I know, a minority), you need to make an entry in rules.d[1] to even allow Chromium to access USB devices.

I can see how this potentially maybe could catch someone, but I don't see it as much of a risk.

[0]: https://blog.fastmail.com/2016/07/23/how-u2f-security-keys-w... [1]: https://developers.google.com/web/updates/2016/03/access-usb...


Part of the problem is that, assuming you didn't know much about how U2F works, it seems pretty natural for a site to request access to your YubiKey in order to use it to authenticate you.

While it's obviously not a total solution, I do think that maybe the permissions prompt should be a bit more scary: https://developers.google.com/web/updates/images/2016-03-02-...

I'd rephrase that to something more along the lines of "example.com wants full control of". Maybe with an option for device manufacturers to opt-in to support for WebUSB, allowing for protocol enhancements to improve security and a less scary permissions prompt.


CORS is irrelevant.

>The attackers would need to get that nonce from the site.

The attackers have their own machine with a browser running on it that visits the real site and gets the nonce, then hands that nonce to the victim to be signed by their key.
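A minimal simulation of that point (all names are illustrative): CORS constrains the victim's browser, not the attacker's own machine, which can fetch the challenge itself and relay it for the victim's key to sign.

```javascript
// The real site issues a random challenge per login attempt
// and remembers the ones still outstanding.
const issued = new Set();
function realSiteIssueChallenge() {
  const c = Math.random().toString(36).slice(2);
  issued.add(c);
  return c;
}
function realSiteVerify(response) {
  // Accept only challenges we actually issued (each one exactly once).
  return issued.delete(response.challenge);
}

// The victim's key happily signs whatever challenge it is handed.
const victimKeySign = (challenge) => ({ challenge, sig: 'signed:' + challenge });

// The attacker's own client fetches the challenge from the real site --
// CORS never enters the picture -- then the phishing page relays it
// to the victim's key over WebUSB.
const challenge = realSiteIssueChallenge();
const response = victimKeySign(challenge);
console.log(realSiteVerify(response)); // true: the relay succeeds
```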


What is the use case for WebUSB? Here [1], someone from Google suggests vendors should write device drivers in Chrome HTML and Chrome Javascript. Please don't.

Or (my assumption) it might be for devices that cannot work without a browser and network connection.

[1] https://developers.google.com/web/updates/2016/03/access-usb...


Well, we found it really, really useful! https://www.numworks.com/blog/webusb-firmware-update/


Yeah, the great thing about WebUSB is that it can easily be used to upgrade devices to new firmware with new features. For example, suppose that some end user's USB device lacks the ability to act like a USB Rubber Ducky and inject malicious keystrokes in order to compromise their machine. WebUSB allows a clean, easy way to fix that remotely.

WebUSB terrifies me.


Next step will be devices that cannot work without Chrome, network connection and "anonymous" "telemetry" for the purpose of "improving customer experience".


A website should never have access to USB devices with just an allow prompt. Imagine taking control of a USB mouse or keyboard... You could then just take control of the machine...


Didn't lots of people suggest this kind of thing as a potential issue when WebUSB was first mooted?


I wish channel-bound tokens were mandatory in the U2F spec, or that a browser key was part of the auth request to the token, for exactly this reason. U2F is only "optionally" unphishable.


FIDO discussed this on their site. It's optional so corporate firewalls that perform MITM can continue to work with U2F.


... sigh.


I mean, if you want a standard for everyone, it's hard to ignore where most people work.


How would that help? Couldn't the WebUSB code simply lie to the U2F device about what the channel is?


Go take a look at token bound channels. It sure could but it'd be completely useless to do so.


I know a bit about token-bound channels. But the U2F device only talks to Chrome via USB. So anything that the legitimate Chrome could say to the U2F device (negotiating tokens, channels, etc.) can now be done by the attacker via WebUSB. So I would think the attacker can get the U2F device's signature on the attacker's channel.

It should be just as if you unplugged your U2F device from your machine and plugged it into the attacker's machine.


It seems like I am often reading about reasons why not to use Chrome...

Are there any good writeups about the security of different web browsers?

Is Chrome a real issue?


I can't imagine that there would be such a writeup. Even just trying to understand the security framework of one browser is a gargantuan task.

As for Chrome being a real issue, some points off the top of my head:

- Its extension store breeds malware at regular intervals (feels like there are headlines about that at least every other month).

- Pretty bad autofill exploit that was left unfixed for years: https://github.com/anttiviljami/browser-autofill-phishing (Might've been fixed in the past year, I haven't checked, but I doubt it.)

- Chrome Sync is not end-to-end encrypted without the use of a second password, which effectively means that it is unencrypted for 99.9% of Chrome users. Google also actively uses this data, weaving your browsing history into the profile that they keep of you. So, if they ever have a data leak, a lot of data is going to come from people using Chrome, too. The NSA/CIA/FBI also tap into this data, possibly using it for cyber-war attacks, so if you live in a country other than the USA, you're making yourself a prime and easy target by inputting this data through Chrome.


At this rate, you might as well write your passwords down on paper and stick it to the computer.

Sure, it's vulnerable to the man behind your back. But if such a threat exists, you shouldn't type your password anyway.


Well, that's ironic. Just yesterday I subscribed to a magazine that offers a Yubikey as a free gift. The magazine? Wired.

It looks like this vulnerability is limited to Chrome at the moment. Good to know if using Chrome (or even Epichrome for SSBs) when doing things like online banking.


U2F != OTP; OTP is what the Yubikey 4 does. For some reason the FIDO Alliance decided they didn't want OTP.

The Yubikey 4 allows OpenPGP keys as well as OTP Yubikey functionality, making it half HSM, half token. The FIDO keys offered by Yubico only do asymmetric cryptography.


>For some reason the FIDO alliance decided they didnt want OTP.

OTP is regularly phishable, not requiring any WebUSB. Before this WebUSB attack, U2F was unphishable.


The Yubikey is great and has uses outside of U2F, which I've never had much faith in.


U2F is actually a very cool spec: basically a user-friendly version of client-side certificates (which could be user friendly, but aren't).

What are your concerns?


...this isn't a flaw in U2F.


The Ledger Nano S also uses WebUSB to sign into the Stellar (Crypto) dashboard and can be used as a U2F authentication device.

I'm curious whether it could be subject to this exploit?


Hmm. I assumed U2F does not protect you from phishing; it just adds a second layer of protection to your account, protecting you from credential theft. The U2F anti-phishing stuff implemented by Chrome is just a neat little extra. Is this behaviour of checking the origin in the spec?


Yes, preventing phishing by only sending credentials to the appropriate origins is a very important part of the spec: https://fidoalliance.org/specs/fido-u2f-v1.2-ps-20170411/fid...


The supposed ability to tap the yubikey button even on phishing sites and not actually give up working credentials was like the selling point.



