Firefox zero-day was used in attack against Coinbase employees, not its users (zdnet.com)
261 points by ga-vu on June 20, 2019 | 95 comments



Article poses a good question. How did a privately reported zero-day leak from Bugzilla into an attacker's arsenal?

Also, why did it take 2 months to fix an RCE? It's an RCE, not some XSS. I'd imagine this would be a high priority. No?


TFA's first answer is the most likely. If one researcher discovers a vulnerability, another researcher can also discover it. No "leak" required.

I agree that RCE should be a priority!


I'm betting on insider access. Microsoft had to lock down internal access to their security bugs when some employees were selling the bugs on the black market.


For what it's worth, Mozilla locks down internal access to security bugs too. I can't see those bugs, which is exactly how it should be, as I have no need to know.


How many people can read these?


As a Mozilla employee I can say that I was in the security group but lost access at some point since I wasn't very active.


How’d you get access in the first place?


I helped out with UI related security bugs (e.g. address bar spoofing) which we had a bunch of at the time.


I don't know.


Do you have a source for that? Google's not giving me anything. I'd definitely like to know more - I can't help but wonder how widespread that kind of behavior is.


Locking security bugs from wide internal read access has been SOP everywhere I've worked for decades.


I think they're asking for a source on the specific claim about Microsoft employees selling bugs on the black market, which is what I would also like to see.

I don't need to be convinced that security bugs should be on a need-to-know basis during the responsible disclosure period; that seems obviously prudent. Anyone not working specifically on security can learn about the details at the same time as the wider public.


I don't know anything about that event, but it reminds me of when 20 Apple contractors had a scheme selling Apple user data for $7M.

https://www.nytimes.com/2017/06/09/business/china-apple-pers...


No source, but I'd be willing to bet it's very widespread.


If it is insider access they will be caught.


Really, that Mozilla would let a reported RCE vulnerability simmer for two months until it bit someone would seem to reflect very poorly on their priorities and competence. Can anyone postmortem why it took so long now that it's fixed?


Firefox likes to bundle security fixes into .0 releases. 67.0 was released May 21 (and went to nightly/beta whatever May 13) and 68.0 won't be released for a few more weeks.


Is there a good reason for this? I would think that a security issue should be addressed and pushed to users' computers as soon as possible, especially something like an RCE.


Security fixes carry the usual risk of regressions (even more than the average bug, when the fix limits something that used to "work"). Therefore they need just as much bake time as other kinds of changes.

Also, shipping security fixes in stand-alone updates makes it much easier for attackers to identify security-critical changes (especially if they have access to source code, which they do for Firefox) and reverse-engineer the flaw. Firefox developers often land critical fixes with somewhat obscured commit messages to increase the work required by attackers to identify the critical security fixes in the torrent of commits that go into each regular release.

Obviously this only makes sense while the bug is believed to be unknown to attackers. If Mozilla believes the bug is being exploited, they can and do issue an emergency update.


> Firefox developers often land critical fixes with somewhat obscured commit messages to increase the work required by attackers to identify the critical security fixes in the torrent of commits that go into each regular release.

Wow, that's fascinating. Do you have any interesting reads to point to in this regard?


Do you know why? Isn’t a security fix a bug fix?


Nope. Security vulns are not regressions!


And how do you classify "Meltdown" and its notoriously bad fix "Total Meltdown" in that case?

To me, the bug fix introduced a clear regression, allowing an even more powerful vuln in the process.


I’m confused, what do you mean? Fixing security vulns can often lead to regressions, since over time users become dependent on behavior that relies on something insecure.


Secure behaviors should generally trump API guarantees.


Your parent comment didn't say security fixes couldn't lead to regressions, they said security vulns themselves aren't regressions.


> How did a privately reported zero-day leak from Bugzilla into an attacker's arsenal?

This was a VERY valuable bug. I mean, it's sad to think about but the most likely scenario is that someone with access to the report at Mozilla or Google (or maybe elsewhere if it was shared more widely) called a friend of a friend of a friend and... sold it.


Moreover, people are bad at keeping secrets. Social engineering is clearly a thing, even among infosec circles. Sometimes all it takes is being in the right bar and having a good ear.


I mentioned the possibility of an untrustworthy person gaining access to bugzilla yesterday but it seems that most people disagreed with it: https://news.ycombinator.com/item?id=20221397



>from Bugzilla into an attacker's arsenal

Typically, they leak the opposite way.


A good example of why HTML emails need to be downgraded to plain text in financial companies, and links stripped and checked before they're shown.
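
Something like this rough sketch of the downgrade step (Python, standard library only; a real mail gateway does much more, and the tag handling here is deliberately simplified):

    from html.parser import HTMLParser

    class MailTextExtractor(HTMLParser):
        """Collect visible text from an HTML mail body; drop link targets."""
        def __init__(self):
            super().__init__()
            self.chunks = []
            self._skip = 0  # depth inside <script>/<style>

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self._skip += 1
            elif tag == "a":
                self.chunks.append("[link removed]")  # keep the text, lose the href

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self._skip:
                self._skip -= 1

        def handle_data(self, data):
            if not self._skip:
                self.chunks.append(data)

    def downgrade(html_body: str) -> str:
        p = MailTextExtractor()
        p.feed(html_body)
        return " ".join(" ".join(p.chunks).split())

    print(downgrade('<p>Hi, <a href="https://evil.example/x">click here</a></p>'))
    # -> Hi, [link removed] click here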


Now if only every company and their mother would stop using third parties for links.

I don't care if you track that I clicked on your link. I care that your link doesn't appear to go to the same site your reply-to email would go.


> I care that your link doesn't appear to go to the same site your reply-to email would go.

From one of the last emails I read this evening: "we would like to inform you that there is a form on our website" [me]: The form is not on your website, only the link to it.

You're right, but it's much simpler to disallow links; people don't really understand the difference even on a website, and email is yet another complication.


How would you recover accounts if links weren't permitted?


I'm late to this, but...

1. while developing -> send tokens for copy/paste if a user has chosen text-only emails (rough sketch below)

2. while administering -> institute the policy of having to ask one of the administrators, if it happens too often you have worse problems in any case
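
A minimal sketch of option 1, assuming the usual issue/redeem split; the in-memory store, names, and lack of expiry handling are all made up for illustration:

    import hashlib, secrets

    def issue_recovery_code(user_id: str, store: dict) -> str:
        """Create a one-time code the user pastes into the recovery form;
        only its hash is kept server-side, and the raw code goes in the
        plain-text email body -- no link involved."""
        code = secrets.token_urlsafe(16)
        store[user_id] = hashlib.sha256(code.encode()).hexdigest()
        return code

    def redeem(user_id: str, code: str, store: dict) -> bool:
        expected = store.pop(user_id, None)  # single use
        digest = hashlib.sha256(code.encode()).hexdigest()
        return expected is not None and secrets.compare_digest(expected, digest)

    store = {}
    code = issue_recovery_code("alice", store)   # mail this to the user
    print(redeem("alice", code, store))          # True, and only once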


I work in finance, and we do something like this. HTML is not scrubbed, but all links are sanitized/scanned/vetted before we can visit them. Kind of annoying, because it can take 5-10 minutes after receiving an email with a link in it before it's cleared. Considering I work in market data and a lot of vendors send out important notices about delays, holidays, etc. as a brief blurb and then a link to more detailed info, it gets annoying. But I understand why it's done, and it's important to do so. Understanding doesn't make it any less annoying, though.


The sad reality is that a truly plain-text email looks "phishy" to most end users today, as the result of 15+ years of fancy HTML mail.


Not really. The exploit involved getting the victim to click a link to the attacker's page, so plain text emails wouldn't prevent anything.

>attackers would send a spear-phishing email luring victims to a web page, where, if they used Firefox, the page would download and run an info-stealer


You can strip the links and replace them with a man-in-the-middle link, so people can't just click directly through to the original destination.
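
Roughly what Outlook's Safe Links rewriting does. A bare-bones sketch, with a made-up internal gateway URL and a deliberately naive URL regex:

    import re, urllib.parse

    GATEWAY = "https://linkcheck.corp.example/scan?url="  # hypothetical vetting service

    def rewrite_links(body: str) -> str:
        """Point every URL in the mail body at the gateway, which scans the
        real target before redirecting the user (or refusing to)."""
        def repl(match):
            return GATEWAY + urllib.parse.quote(match.group(0), safe="")
        return re.sub(r"https?://\S+", repl, body)

    print(rewrite_links("Notice: https://vendor.example/delays/2019-06-20"))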


And then what? You show them the original link and they click it again?


There are a million different things you can do. You can simply strip out all links. You can strip out links and only allow white-listed links, etc. Once a site has been vetted, it could be allowed to be clicked on. Or you can just have a big javascript alert box that said "Remember, you are clicking on a link that is unvetted and it could steal your credentials. Be careful." I don't know, be creative.

Anything that will wake people up and stop them from just blindly clicking on things. For a financial institution like Coinbase where a hacker could compromise the security of the entire company, it doesn't seem completely unreasonable.


Then the attacker could buy ads on a site they suspect you will visit.

As long as employees need to be able to browse the internet, any whitelisting of links seems like a waste of resources.


>Or you can just have a big javascript alert box that said "Remember, you are clicking on a link that is unvetted and it could steal your credentials. Be careful

That might work for the first day or so, but you'll eventually tune them out and blindly click past the warning.


I am 100% on board with this solution.

Email doesn't benefit much from formatting when it's just plain communication.

Marketing and sales is a different story.


Microsoft has the option to run all links in incoming mail through their system. Whether or not that would have caught a FF zero day on the target site is another question. Where I work this was implemented after people started getting (spear) phishing mails.


It's interesting how cryptocurrencies have provided an economic incentive for exploiting zero-days. It's hard to keep an exploit a secret when there are such huge potential payoffs.

The level of sophistication in crypto hacking would terrify me if I were a crypto startup employee.


If I were a crypto startup I'd be more worried about insider threat than anything sophisticated.

Protecting against a rogue employee also adds defence against employee computer compromise.


Insider threat is probably non-existent. You know all your employees, and if a hack happens there is an easy list of people to investigate. Also, trying to cash out large amounts of stolen crypto is going to be difficult with the amount of blockchain analysis going on and all fiat exchanges requiring KYC.


Insider threat is absolutely a risk.

The other risk if you don't fully and publicly mitigate insider threat is someone applying pressure to your employees to do something bad. (The intel community and other high risk environments have long had this threat model). This is basically identical to insider threat at time of pressure, although there are some different countermeasures leading up to it.

The low end of this is catching someone doing something they shouldn't (browsing porn on a work computer, having a relative with legal difficulties, etc.) and applying that as leverage. Usually "report early, no action will be taken against you" is a good policy for minor things.

I would NEVER expect (or want) someone to do anything but fully comply with an attacker who has kidnapped his kid and credibly threatens to do something horrible unless he authorizes a payment. My instructions to the insiders are "comply; we have technical countermeasures which will make those attacks fail".


KYC should really be renamed CYA (cover your ass), because it's borderline useless as a real security measure against determined threat actors. For anyone willing to commit fraud and/or identity theft, it would be relatively easy to "pass" a KYC process under an assumed identity, especially when no in-person/biometric verification is required.


I wouldn’t be so sure. There have been situations where insiders stole crypto and then claimed they were hacked to cover it up. Looks like Gelfman Blueprint is an example and there are almost definitely others.


> there is an easy list of people to investigate

If the criminal has an ounce of brains, they'd execute the hack while on an overseas vacation.


Or if the insider slowly siphoned off tiny amounts from lots of accounts over a long period of time. That sort of thing would probably be very hard to catch.

I am not likely to notice one day if I have 4.96551 ETH and the next 4.96530 ETH


A good wallet would show the last transactions prominently. Also, you can set up email alerts for ETH transfers from/to your account with a tool like etherscan.io.

That's the advantage of public ledgers, it's way easier to monitor for abnormalities. You don't need to tell your employees about all the checks you have put in place either.
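
For example, a crude balance watcher against the etherscan API (endpoint and parameter names are from memory, so double-check their docs; the address and key are placeholders):

    import json, time, urllib.request

    ADDRESS = "0x0000000000000000000000000000000000000000"   # wallet to watch
    API_KEY = "YourEtherscanApiKey"
    URL = ("https://api.etherscan.io/api?module=account&action=balance"
           "&address=" + ADDRESS + "&tag=latest&apikey=" + API_KEY)

    def balance_wei() -> int:
        with urllib.request.urlopen(URL) as resp:
            return int(json.load(resp)["result"])

    last = balance_wei()
    while True:
        time.sleep(60)
        now = balance_wei()
        if now != last:  # flag any change, however small
            print("balance moved: %+.6f ETH" % ((now - last) / 1e18))
            # hook your email/pager alert here
        last = now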


>I am not likely to notice one day if I have 4.96551 ETH and the next 4.96530 ETH

note to self: only keep round amounts in online wallets.


According to this story (https://news.bitcoin.com/looting-fox-sabotage-shapeshift) the threat is real and at least one person got away with it.


Speaking as someone who works for a crypto-startup, it's hardly soothing to know that we are a gigantic target for hackers. Also have the joy of being consistently bombarded with phishing emails pretending to be from other employees.


For internal funds use Multi-sig and require all signers to use hardware wallets. As for contracts, formally verify them using KEVM and get audited by a reputable cybersecurity firm like Trail of Bits.


Enable advanced protection on your personal Google account while you're at it. Bit of hassle, lot of benefit.


I am more interested in how Coinbase employees discovered the attack. I am assuming nobody clicked the suspicious link and instead took it to a vm for reversing and analysis. It would have been game over if the exploit was actually executed on a non-sandboxed machine.


Notice unusual connections coming from one laptop, then use a second system to click on all the links they clicked the day before?


I’d love to know how Coinbase discovered the exploit — whether on the employee desktops, due to unusual activity by the employee account on internal Coinbase systems, at the company network level, by a human or robot, etc.



Thank you! Great that they shared the IOCs and IPs associated with the attack. That thread doesn’t really describe how they discovered it though, right?


They left out one other possibility: the attackers purchased the 0-days from a broker or a company like Zerodium. The cost might be worth it to them if the perceived odds of getting in were high.


Is it really that easy to get access to the exploits of the acquisition program? Throw them a bunch of money and it's yours?


I'm not saying it was easy. I have no idea how those programs or the brokers supplying them work. In this hypothetical scenario, they could be regular customers operating within the suppliers' expectations. Alternatively, a broker has some 0-days that the big-name companies aren't buying or not at that price. Potentially already has them. Some other party is willing to buy them at a nice, but reduced, price.

When I did thought experiments on it, one of the big issues for me was how to show buyers the vulnerability without losing money from them stealing it or deal with them claiming that they already had it in a way that minimizes risk to all parties. It was a tricky problem. Folks selling on the side was a potential result in some scenarios.


Do we know how the exploit would be used? If you are accessing your mail account in the same browser, are you at risk?


Serious question, is Chrome considered to be more secure than Firefox by cybersecurity professionals? And if so, why?


Why were Coinbase employees allowed to use Mozilla Firefox?


Why wouldn't they be allowed to use Firefox?


[flagged]


Are we forgetting that Chrome had a zero-day literally three months ago?

https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/...

> Google is aware of reports that an exploit for CVE-2019-5786 exists in the wild.


While I'm a Firefox user, that's actually evidence to xtalh's point: that bug by itself is useless, because Chrome's sandbox meant that taking over the rendering process is not enough. They only managed to escape on Windows 7, thanks to another bug (in Windows itself - CVE-2019-0808).


Note that Firefox has a sandbox too (in fact, it shares a good bit of code with that of Chrome), and therefore a sandbox escape is necessary to elevate privileges.

(NB: I have no knowledge of the details of this specific bug.)


Leaving aside whether it makes sense to call something "useless" that was actually used in the wild, the original article specifically mentions (twice) that the Firefox RCE 0-day was also sandboxed and also only managed to escape thanks to another 0-day.

(And I'm actually a Chrome user, for now.)


You almost make it sound like Chrome weren't written in C++, or that the typical Chrome install weren't hosted on a few 10s of millions of lines of C/C++


Probably because 85% of the browser market is Chromium-based, while the rest is not. Hence, a smaller attack surface.


That's not what "attack surface" means. Firefox would have a smaller attack surface if it supported fewer file formats or protocols or such than Chrome.


True. A smaller target base would have been the correct term.


They are not allowed to, actually. However, maybe a few were using it anyway? As far as I can tell, the attack was fully unsuccessful in any case.

They probably discovered phishing attempts with a link to a page deploying a curious payload.

Regardless of my post above, keep in mind I do use Firefox primarily and see nothing wrong with it.


> Why were Coinbase employees allowed to use Mozilla Firefox?

Nowhere in the article does it say that any Coinbase employee was using Firefox. It only says that the attack targeted Firefox, not that Coinbase employees use it.


A better question would be: why were Coinbase employees allowed to use any browser with javascript enabled and outside of a VM? Qubes OS has been a thing for quite a while.


> why were Coinbase employees allowed to use any browser with javascript enabled

I don't know, maybe because they need to get work done...? Even traditional banks allow JS.


I've worked at a large traditional bank (market cap and enterprise value are both around 100b); they also allowed Firefox as well as JS, at least for developers (I don't know what it looked like for non-developers).


Of course, there generally are legal processes to leverage if money is stolen from a bank. The cryptosphere isn't as forgiving.


Google and Stackoverflow work just fine without javascript enabled. Trustworthy sites can be whitelisted if absolutely necessary.


It's this sort of attitude that makes sysadmins so incredibly popular among the masses.

Hint: if your environment feels like a concentration camp, users will find ways to work outside of it most of the time - which will be even more disastrous.


That's a fair point when literally hundreds of millions of dollars aren't on the line. It's not hard to properly secure your system from all manner of internet threats. There's no excuse for crypto exchanges not to implement such measures.


If hundreds of millions of dollars are one JS exploit away, the defense model is flawed. That sort of movement should require approvals from multiple people and even dedicated terminals that are not used for everyday browsing.

Security is a tradeoff; nuking browsers for everyone is just a bad tradeoff in 2019.


There are a million hypothetical security issues you could worry about. How would you weigh the risks of Javascript against the loss of basically all online productivity apps?


A standard VM doesn't protect from attacks of this level of sophistication.


I imagine it does, unless the attacker has an additional Xen zero-day to pile on.


In the thread about this attack yesterday someone linked a paper about another attack against cryptocurrency researchers which did use a VM escape exploit [1], so if a cryptocurrency researcher is worth such an exploit, I'd say a company handling the kind of money Coinbase does is probably worthy as well.

[1]: https://news.ycombinator.com/item?id=20221279


Have they considered rewriting it in Rust?



