I'm betting on insider access. Microsoft had to lock down internal access to their security bugs when some employees were selling the bugs on the black market.
For what it's worth, Mozilla locks down internal access to security bugs too. I can't see those bugs, which is exactly how it should be, as I have no need to know.
Do you have a source for that? Google's not giving me anything. I'd definitely like to know more - I can't help but wonder how widespread that kind of behavior is.
I think they're asking for a source on the specific claim about Microsoft employees selling bugs on the black market, which is what I would also like to see.
I don't need to be convinced that security bugs should be on a need-to-know basis during the responsible disclosure period, that seems obviously prudent. Anyone not working specifically on security can learn about the details at the same time as the wider public.
Really, that Mozilla would let a reported RCE vulnerability simmer for two months until it bit someone would seem to reflect very poorly on their priorities and competence. Can anyone postmortem why it took so long now that it's fixed?
Firefox likes to bundle security fixes into .0 releases. 67.0 was released May 21 (and went to nightly/beta whatever May 13) and 68.0 won't be released for a few more weeks.
Is there a good reason for this? I would think that a security issue should be addressed and patched onto users' computers as soon as possible, especially something like an RCE.
Security fixes carry the usual risk of regressions (even more than the average bug, when the fix limits something that used to "work"). Therefore they need just as much bake time as other kinds of changes.
Also, shipping security fixes in stand-alone updates makes it much easier for attackers to identify security-critical changes (especially if they have access to source code, which they do for Firefox) and reverse-engineer the flaw. Firefox developers often land critical fixes with somewhat obscured commit messages to increase the work required by attackers to identify the critical security fixes in the torrent of commits that go into each regular release.
Obviously this only makes sense while the bug is believed to be unknown to attackers. If Mozilla believes the bug is being exploited, they can and do issue an emergency update.
> Firefox developers often land critical fixes with somewhat obscured commit messages to increase the work required by attackers to identify the critical security fixes in the torrent of commits that go into each regular release.
Wow, that's fascinating. Do you have any interesting reads to point to in this regard?
I'm confused: what do you mean? Fixing security vulns can often lead to regressions, since over time users come to depend on the very behavior that was insecure.
> How did a privately reported zero-day leak from Bugzilla into an attacker's arsenal?
This was a VERY valuable bug. I mean, it's sad to think about but the most likely scenario is that someone with access to the report at Mozilla or Google (or maybe elsewhere if it was shared more widely) called a friend of a friend of a friend and... sold it.
Moreover, people are bad at keeping secrets. Social engineering is clearly a thing, even among infosec circles.
Sometimes all it takes is being in the right bar and having a good ear.
I mentioned the possibility of an untrustworthy person gaining access to bugzilla yesterday but it seems that most people disagreed with it: https://news.ycombinator.com/item?id=20221397
> I care that your link doesn't appear to go to the same site your reply-to email would go.
From one of the last emails I read this evening: "we would like to inform you that there is a form on our website"
[me]: The form is not on your website, only the link to it.
You're right, but it's much simpler to disallow links entirely. People don't really understand the difference on a website, and email is yet another complication on top of that.
I work in finance, and we do something like this. HTML is not scrubbed, but all links are sanitized/scanned/vetted before we can visit them. Kind of annoying, because it can take 5-10 minutes after receiving an email with a link before it's cleared. Considering I work in market data, and a lot of vendors send out important notices about delays, holidays, etc. as a brief blurb and then a link to more detailed info, it gets annoying. But I understand why it's done, and it's important to do so. Understanding doesn't make it any less annoying, though.
Not really. The exploit involved getting the victim to click a link to the attacker's page, so plain text emails wouldn't prevent anything.
>attackers would send a spear-phishing email luring victims to a web page, where, if they used Firefox, the page would download and run an info-stealer
There are a million different things you can do. You can simply strip out all links. You can strip out links and only allow whitelisted ones, etc. Once a site has been vetted, it could be allowed to be clicked on. Or you could just have a big JavaScript alert box that says "Remember, you are clicking on a link that is unvetted and it could steal your credentials. Be careful." I don't know, be creative.
Anything that will wake people up and stop them from just blindly clicking on things. For a financial institution like Coinbase where a hacker could compromise the security of the entire company, it doesn't seem completely unreasonable.
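Just to make the strip-unless-whitelisted idea concrete, here's a rough Python sketch (BeautifulSoup assumed; the vetted hosts are made up, and this is not a claim about what any vendor actually does):

```python
# Minimal sketch: strip links in inbound HTML email unless the destination
# host is on an explicitly vetted whitelist. Hypothetical example only.
from urllib.parse import urlparse
from bs4 import BeautifulSoup

VETTED_HOSTS = {"example-vendor.com", "status.example-exchange.com"}  # hypothetical

def sanitize_email_html(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for anchor in soup.find_all("a", href=True):
        host = urlparse(anchor["href"]).hostname or ""
        if host not in VETTED_HOSTS:
            # Replace the link with its visible text plus a warning marker,
            # so the reader can see the destination was not vetted.
            anchor.replace_with(f"{anchor.get_text()} [link removed - unvetted: {host}]")
    return str(soup)

if __name__ == "__main__":
    sample = '<p>Holiday notice: <a href="https://evil.example.net/x">details here</a></p>'
    print(sanitize_email_html(sample))
```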
>Or you can just have a big javascript alert box that said "Remember, you are clicking on a link that is unvetted and it could steal your credentials. Be careful
That might work for the first day or so, but you'll eventually tune them out and blindly click past the warning.
Microsoft has the option to run all links in incoming mail through their system. Whether or not that would have caught a FF zero day on the target site is another question. Where I work this was implemented after people started getting (spear) phishing mails.
It's interesting how cryptocurrencies have provided an economic incentive for exploiting zero-days. It's hard to keep an exploit a secret when there are such huge potential payoffs.
The level of sophistication in crypto hacking would terrify me if I were a crypto startup employee.
Insider threat is probably non-existent. You know all your employees, and if a hack happens there is an easy list of people to investigate. Also, trying to cash out large amounts of stolen crypto is going to be difficult given the amount of blockchain analysis going on and all fiat exchanges requiring KYC.
The other risk if you don't fully and publicly mitigate insider threat is someone applying pressure to your employees to do something bad. (The intel community and other high risk environments have long had this threat model). This is basically identical to insider threat at time of pressure, although there are some different countermeasures leading up to it.
The low end of this is catching someone doing something they shouldn't (browsing porn on a work computer, having a relative with legal difficulties, etc.) and applying that as leverage. Usually "report early, no action will be taken against you" is a good policy for minor things.
I would NEVER expect (or want) someone to do anything but fully comply with an attacker who has kidnapped his kid and credibly threatens to do something horrible unless he authorizes a payment. My instructions to the insiders are "comply; we have technical countermeasures which will make those attacks fail".
KYC should really be renamed CYA (cover your ass), because it's borderline useless as a real security measure against determined threat actors. For anyone willing to commit fraud and/or identity theft, it would be relatively easy to "pass" a KYC process under an assumed identity, especially when no in-person/biometric verification is required.
I wouldn’t be so sure. There have been situations where insiders stole crypto and then claimed they were hacked to cover it up. Looks like Gelfman Blueprint is an example and there are almost definitely others.
Or if the insider slowly siphoned off tiny amounts from lots of accounts over a long period of time. That sort of thing would probably be very hard to catch.
I am not likely to notice one day if I have 4.96551 ETH and the next 4.96530 ETH
A good wallet would show the last transactions prominently. Also, you can set up email alerts for ETH transfers from/to your account with a tool like etherscan.io.
That's the advantage of public ledgers: it's way easier to monitor for abnormalities. You don't need to tell your employees about all the checks you have put in place, either.
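A check like that is simple to automate. A rough sketch, assuming web3.py (v6 API names) and a JSON-RPC endpoint; the endpoint URL, address, and alert hook below are placeholders:

```python
# Rough sketch: poll an address's ETH balance and flag any change.
# Placeholder endpoint/address; the print stands in for a real alert hook.
import time
from web3 import Web3

RPC_URL = "https://mainnet.example-node.io"                # placeholder endpoint
WATCHED = "0x0000000000000000000000000000000000000000"     # placeholder address

def watch_balance(poll_seconds: int = 60) -> None:
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    last = w3.eth.get_balance(WATCHED)
    while True:
        time.sleep(poll_seconds)
        current = w3.eth.get_balance(WATCHED)
        if current != last:
            delta_eth = Web3.from_wei(current - last, "ether")
            # Hook in email/Slack/pager here instead of printing.
            print(f"Balance changed by {delta_eth} ETH")
            last = current

if __name__ == "__main__":
    watch_balance()
```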
Speaking as someone who works for a crypto-startup, it's hardly soothing to know that we are a gigantic target for hackers. Also have the joy of being consistently bombarded with phishing emails pretending to be from other employees.
For internal funds, use multi-sig and require all signers to use hardware wallets. As for contracts, formally verify them using KEVM and get them audited by a reputable cybersecurity firm like Trail of Bits.
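To be clear about what the n-of-m policy buys you, here's a purely conceptual Python sketch of the approval gate. A real setup would enforce this on-chain or in hardware (that's the whole point of multi-sig), not in application code, and the signer names and threshold below are made up:

```python
# Conceptual sketch of an n-of-m approval policy for fund movements.
# Illustrates the multi-sig idea only; not an on-chain contract.
from dataclasses import dataclass, field

SIGNERS = {"alice", "bob", "carol"}   # hypothetical hardware-wallet holders
THRESHOLD = 2                          # 2-of-3 policy

@dataclass
class Withdrawal:
    amount_eth: float
    destination: str
    approvals: set[str] = field(default_factory=set)

def approve(tx: Withdrawal, signer: str) -> None:
    if signer not in SIGNERS:
        raise PermissionError(f"{signer} is not an authorized signer")
    tx.approvals.add(signer)

def can_execute(tx: Withdrawal) -> bool:
    return len(tx.approvals & SIGNERS) >= THRESHOLD

tx = Withdrawal(amount_eth=10.0, destination="0xabc...")
approve(tx, "alice")
assert not can_execute(tx)   # one signature is not enough
approve(tx, "bob")
assert can_execute(tx)       # threshold reached
```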
I am more interested in how Coinbase employees discovered the attack. I am assuming nobody clicked the suspicious link and instead took it to a vm for reversing and analysis. It would have been game over if the exploit was actually executed on a non-sandboxed machine.
I’d love to know how Coinbase discovered the exploit — whether on the employee desktops, due to unusual activity by the employee account on internal Coinbase systems, at the company network level, by a human or robot, etc.
Thank you! Great that they shared the IOCs and IPs associated with the attack. That thread doesn’t really describe how they discovered it though, right?
They left out one other possibility: they purchased the 0-days from a broker or company like Zerodium. The cost might be worth it to them if there were high perceived odds of getting in.
I'm not saying it was easy. I have no idea how those programs or the brokers supplying them work. In this hypothetical scenario, they could be regular customers operating within the suppliers' expectations. Alternatively, a broker has some 0-days that the big-name companies aren't buying or not at that price. Potentially already has them. Some other party is willing to buy them at a nice, but reduced, price.
When I did thought experiments on it, one of the big issues for me was how to show buyers the vulnerability without losing money from them stealing it or deal with them claiming that they already had it in a way that minimizes risk to all parties. It was a tricky problem. Folks selling on the side was a potential result in some scenarios.
While I'm a Firefox user, that's actually evidence to xtalh's point: that bug by itself is useless, because Chrome's sandbox meant that taking over the rendering process is not enough. They only managed to escape on Windows 7, thanks to another bug (in Windows itself - CVE-2019-0808).
Note that Firefox has a sandbox too (in fact, it shares a good bit of code with that of Chrome), and therefore a sandbox escape is necessary to elevate privileges.
(NB: I have no knowledge of the details of this specific bug.)
Leaving aside whether it makes sense to call something "useless" that was actually used in the wild, the original article specifically mentions (twice) that the Firefox RCE 0-day was also sandboxed and also only managed to escape thanks to another 0-day.
You almost make it sound like Chrome weren't written in C++, or that the typical Chrome install weren't hosted on a few 10s of millions of lines of C/C++
That's not what "attack surface" means. Firefox would have a smaller attack surface if it supported fewer file formats or protocols or such than Chrome.
> Why were Coinbase employees allowed to use Mozilla Firefox?
Nowhere in the article does it say that any Coinbase employee was using Firefox. It only says that the attack targeted Firefox, not that Coinbase employees use Firefox.
A better question would be: why were Coinbase employees allowed to use any browser with javascript enabled and outside of a VM? Qubes OS has been a thing for quite a while.
I've worked at a large traditional bank (market cap and enterprise value are both around 100B); they also allowed Firefox as well as JS, at least for developers (I don't know what it looked like for non-developers).
It's this sort of attitude that makes sysadmins so incredibly popular among the masses.
Hint: if your environment feels like a concentration camp, users will find ways to work outside of it most of the time - which will be even more disastrous.
That's a fair point when literally hundreds of millions of dollars aren't on the line. It's not hard to properly secure your system from all manner of internet threats. There's no excuse for crypto exchanges not to implement such measures.
If hundreds of millions of dollars are one JS exploit away, the defense model is flawed. That sort of movement should require approvals from multiple people and even dedicated terminals that are not used for everyday browsing.
Security is a tradeoff; nuking browsers for everyone is just a bad tradeoff in 2019.
There are a million hypothetical security issues you could worry about. How would you weigh the risks of Javascript against the loss of basically all online productivity apps?
In the thread about this attack yesterday someone linked a paper about another attack against cryptocurrency researchers which did use a VM escape exploit [1], so if a cryptocurrency researcher is worth such an exploit, I'd say a company handling the kind of money Coinbase does is probably worthy as well.
Also, why did it take two months to fix an RCE? It's an RCE, not some XSS. I'd imagine this would be high priority, no?