Side-channel attacking browsers through CSS3 features (evonide.com)
368 points by drchiu on May 31, 2018 | 78 comments



Equally worth surfacing:

  Habalov reported the bug to Google and Mozilla engineers,
  who fixed the issue in Chrome 63 and Firefox 60.

  "The bug was addressed by vectorizing the blend mode 
  computations," Habalov said. Safari's implementation 
  of CSS3 mix-blend-mode was not affected as the blend 
  mode operations were already vectorized.


Block quotes are unreadable on mobile, so here's the unformatted text:

Habalov reported the bug to Google and Mozilla engineers, who fixed the issue in Chrome 63 and Firefox 60.

"The bug was addressed by vectorizing the blend mode computations," Habalov said. Safari's implementation of CSS3 mix-blend-mode was not affected as the blend mode operations were already vectorized.


I agree and it's annoying. Is there an easy way to fix that?


When posting a quote, don't indent the text. Just prepend '> ' instead, like this:

> I agree and it's annoying. Is there an easy way to fix that?

Readable on mobile, and makes clear what is/isn't a quote.


There is: remove the max-width from the <pre> element and change the white-space of the <code> element to e.g. pre-line.
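For example, an unofficial userscript-style tweak along these lines (assuming HN's <pre><code> markup for code blocks) would do it:

  // Unofficial tweak: relax HN's code-block styling so indented quotes
  // wrap instead of forcing horizontal scrolling on mobile.
  for (const pre of document.querySelectorAll('pre')) {
    pre.style.maxWidth = 'none';
  }
  for (const code of document.querySelectorAll('pre > code')) {
    // pre-line collapses runs of spaces but keeps line breaks and allows wrapping.
    code.style.whiteSpace = 'pre-line';
  }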


What does "vectorizing the blend mode" mean in this context? I don't get it.


There is a similar vulnerability with `mix-blend-mode` that enables a script to check whether a user has visited any given URL. It has existed since as early as 2016 and is still a problem today in Chrome 67. See: https://lcamtuf.blogspot.com/2016/08/css-mix-blend-mode-is-b...

And here is a demo: http://lcamtuf.coredump.cx/whack/

It has limitations which make it slightly impractical to check against a large number of sites but it's still surprising to see this hasn't been fixed.


That's very broken. It misses lots of websites I did visit and shows me websites I did not visit as though I did.


Make sure to click the visible mole, not elsewhere on the page.


Does not matter. This attack has not worked since 2016.


Works in Firefox Nightly 62.0a1 very reliably


Works on chrome desktop and chrome mobile for me.


This goes beyond just identification to actual data leakage, but it ties into a larger point about the GDPR and other regulations that we've been discussing ad nauseam here, namely questions like: "Why don't the GDPR regs just tell me what I need to do?"

The GDPR being deliberately vague is a response to issues like these. It serves the interests of neither the users nor the regulation to say "Get consent before setting cookies, and only cookies", only to have sites and services turn around and use methods like this, or:

BrowserFingerprinting - https://panopticlick.eff.org/

Super/Ever Cookies - https://github.com/samyk/evercookie

HSTS identification - https://www.owasp.org/index.php/HTTP_Strict_Transport_Securi...

or whatever the new sneaky identification via browser features hack happens to be.


The whole "tell users we're using cookies" thing is idiotic -- every site uses cookies, and that's fine. The trouble lies not in the site's use of cookies, but in the use of cookies by third-party content providers used by the site.

It's the FB, Twitter, and other "like" and "share" buttons that are the problem.


This is kind of the subtlety that I'm getting at here: what the GDPR says is you need to say how you (as a controller operating a website) are tracking people.

Because the minute you throw down a technical rule it's trivial to route around it. Case in point: say you allowed only 1st-party cookies (as defined by cross-domain browser behavior and cookie setting). I could set up a CNAME for analytics.my-domain.com that points to FB's ad retargeting servers, and they could include in the actual cookie data my id, IP, or whatever else they'd need to look up a user's information across domains.

Same with IPs: every GDPR discussion devolves into someone saying: "Well, every site uses IPs! We have them in our logs, etc.! It's stupid to say that they're personal data." That's how I thought of them too, until I got a glimpse into some of the ad networks/bidding systems, where IPs are treated as more or less a currency for targeting, demographics, etc.

With the GDPR it's not the data, it's what's done with the data.


I mean, there's nothing wrong with statutes and regulations being worded generally as long as they are not vague. Getting too prescriptive in this particular context would be counterproductive as the technology issues change over time.

What the GDPR should say is things like "do not allow third-parties to track your users" or "warn users about third-party tracking". That would be exceedingly clear and actionable.

Now, "do not allow third-parties to track your users" might require some additional regarding implementation via contractual clauses. For example, if site A uses resources from site B and has a contract with B stipulating non-tracking of A's users, is that good enough? What if B is outside the EU? And so on. But aside from such side issues, "do not allow third-parties to track your users" is trivial to implement: a) only embed resources from third-parties that agree not to track your users, b) do not embed resources from any other third-parties.

"Warn users about third-party tracking" is even easier to implement: if you embed resources from third parties, you must warn.

> With the GDPR it's not the data, it's what's done with the data.

Good! You can use contracts to manage this with all the third-parties you deal with.

Now, what about links to [non-embedded] external resources? We must not kill the web.


Ok, so you're describing (IMO) exactly what's in the text of the GDPR.

Here is a "plain english" translation of the GDPR that makes much of this more evident:

https://blog.varonis.com/gdpr-requirements-list-in-plain-eng...

If you dig into it, I think you'll find it lines up nicely with what you are suggesting.


You can't make a first party cookie work the same as a third party cookie. They wouldn't follow you across sites, which is the whole point of a third party tracking cookie. Facebook wouldn't know it's you visiting a site or know the same person visited site A and site B.


You're correct if you're _only_ using cookies.

My point was that it could be combined with other tracking means (say a browser fingerprint, an HSTS fingerprint, or both), and that value transferred via a "first party" domain that was really just a CNAME'd third-party server. Then you'd have a setup that meets the requirement to "only set 1st party cookies" but misses the intent of the regulation.


If you had a reliable way to fingerprint browsers and tie them to a Facebook ID, then you don't need cookies at all.


You don't have to warn the users if your cookies are first-party cookies that make your site function.


This is a very very good thing. You do not want the government writing technical specifications. They WILL screw it up and we WILL be stuck with it.


Agreed. They have also been trying to introduce insecure cryptography algorithms that are doing terribly in peer review, and they won't detail the weaknesses of the algorithms or how to prevent attacks on them. I'm thinking it's because they want to know the 0-days and keep them from everyone else.


Being legally vague is advantageous: it incrementally introduces the idea that power gets to regulate what people remember.


The gist of it is that there's a timing attack on fancy stacks of blending modes. Calculating a final pixel color takes different amounts of time for different underlying pixels, so JavaScript can "scan" and effectively OCR a page, or an iframe in that page.
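Very roughly, the measurement primitive looks something like the sketch below (illustrative JavaScript only, not the researchers' actual exploit code; it assumes an expensive stack of mix-blend-mode DIVs is already positioned over one zoomed-in pixel of the cross-origin iframe):

  // Time roughly how long the browser takes to produce the next frame.
  // The double requestAnimationFrame is a common trick: by the time the
  // second callback fires, the frame containing our blend-mode stack has
  // been painted at least once.
  function measureFrameTime() {
    return new Promise(resolve => {
      const start = performance.now();
      requestAnimationFrame(() => {
        requestAnimationFrame(() => resolve(performance.now() - start));
      });
    });
  }

  // Slower paints correlate with one underlying pixel value, faster paints
  // with another; repeating this per pixel lets a script "scan" the iframe.
  async function classifyPixel(thresholdMs) {
    const t = await measureFrameTime();
    return t > thresholdMs ? 'slow' : 'fast';
  }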


All doors are open for side-channel leaks. Having programs run on your machine is like having someone watch your computer. The issue is how to control the exfiltration of that data from your computer. That's what the sandbox should focus on.


Not sure why this is being downvoted. I admit that I find it hard to see how to control the exfiltration of data once it's been figured out by a badly sandboxed program. But the idea that systematically addressing side channel attacks in a sandbox is really hard seems very valid. I mean, the exploit described in TFA is quite the argument in favor of this point.


I wonder if we'll ever get back to "go ahead and browse random script-free websites, but only run trusted code". I guess it's way too late for that. We'll be patching side-channels forever.


Yes, this isn't even the first cross-domain leakage attack on iframes using CSS. [0] There were similar issues with how hit testing was implemented for `document.elementFromPoint()`[1], and probably tons of other things I'm forgetting.

Ideally cross-origin framing would have been disallowed by default but frames were added to the spec before people spent a lot of time thinking about the same-origin-policy implications.

[0]: https://www.contextis.com/resources/white-papers/pixel-perfe...

[1]: http://blog.saynotolinux.com/blog/2014/02/05/whats-that-smel...


It's surprising to me that anything should be allowed to overlap on top of iframes. This seems to be a continuous vector for other security issues, such as fake buttons over the Facebook "Like" button, etc.


That would break a lot of things. For example, you couldn't use a modal on a site that had a Facebook Like button on it.

Clickjacking is generally avoided via the `X-Frame-Options` header or the CSP frame-ancestors directive.
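For reference, a minimal Node.js sketch (a hypothetical server, not anything from the article) that sets both of those responses:

  const http = require('http');

  http.createServer((req, res) => {
    // Legacy header: refuse to be rendered inside any frame.
    res.setHeader('X-Frame-Options', 'DENY');
    // CSP equivalent; 'self' instead of 'none' would allow same-origin framing.
    res.setHeader('Content-Security-Policy', "frame-ancestors 'none'");
    res.end('not frameable');
  }).listen(8080);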


> For example, you couldn't use a modal on a site that had a Facebook Like button on it.

That sounds great; are there any other side benefits available?


A cross-origin iframe could get completely blanked out (no rendering and no interaction, maybe replaced by a custom background defined by the parent page) as soon as a single pixel from the parent page is rendered above it.


Are you sure? Maybe the modal couldn't cover the Like button, and it might be weird if the user could still click Like while your modal is up, but it wouldn't break your page if this were the case.


I don't understand what you are proposing. If the modal can't cover the like button, how would you implement a modal? How would you implement the standard background fade when a modal is open (which is a div covering the entire page)? How would you propose implementing this change such that it doesn't require millions of websites to change their code?


A full background fade is an aesthetic thing at best, and it would be disingenuous to fully fade a page if some parts are not in fact blocked (i.e. remain clickable). So I think it would require no changes; it would just look different.

Having said that, I see no reason to support Like buttons OR modals and I would be fine with millions of sites being forced never to use them. We know Facebook Like buttons are over-engineered trackers whose domains are best blocked at your router. And when modals are not being abused for things users don’t want, the remaining “good” uses for modals would still be better implemented as less-intrusive and more-asynchronous things (display a modeless background message with buttons, for example).


It seems to me that you've never worked at a company with a marketing team.


The real consequence would likely be that such buttons could not use iframes anymore. Which would probably be a good thing anyhow, in the long run.


Or, said more succinctly, clickjacking


Alternative interpretation: Iframes are so overpowered and such an edge case for the browser's security models that they cause constant issues with the rest of the reasonable browser spec.


Iframes will never go away.

But we need a way to tell a browser "if you embed this page into an iframe, it needs to be the topmost content, no transform/translate/visibility or anything". Social/login iframes would set that flag, and it would prevent clickjacking and attacks like this one.


All websites are supposed to set the header X-Frame-Options: DENY to block being embedded in iframes. It's a solved problem.


That doesn't solve the issue of clickjacking attempts on pages meant to be in iframes (FB like buttons are in iframes)


That solves clickjacking for things which don't want to be iframed, but not for things like the Like button or login buttons which specifically need to be embedded into an existing page (and they were the attack vector for this article)


iframes have been a "no-no" since web 1.0.

Still no up-front browser setting for disabling them. I wonder why...


Worth noting that this general category of render-timing-based pixel reading has been around for a while. For example, https://www.contextis.com/media/downloads/Pixel_Perfect_Timi... is from 2013 and uses SVG instead, and I could swear I've seen a CSS one from a few years ago.

I really suspect there's only one fix, and it seems reasonable to me: don't allow sites to place content over iframes, period. Allowing it opens all kinds of exploits like this and various flavors of clickjacking, all basically un-preventable since there's no limit to the techniques possible.


I don't think this can really be blamed on CSS. I mean, yes, a new CSS feature allowed a timing attack that could extract information, but these kinds of issues keep on coming up time and time again, meaning that it's not this particular CSS feature that should be blamed.

The issue is the ability to interact in any way with cross site resources which contain any kind of potentially sensitive information. This leads to things like clickjacking attacks, CSRF, information leaks like this, and so on.

The things that could be done to fix it:

1. Don't allow any kind of cross-site embedding (yeah, this isn't going to happen)

2. Treat any kind of cross-site embedding like private browsing mode; don't ever send any credentials along with it

3. Don't allow the embedding site to interact in any way with embedded content. Treat it like an entirely separate, opaque layer above everything else, not subject to layering anything over it

Of course, I don't think any of these are actually going to happen, because they'd break too much. But otherwise, it's going to be a game of whack-a-mole with information leaks, new kinds of clickjacking and CSRF, and so on.


> 3. Don't allow the embedding site to interact in any way with embedded content. Treat it like an entirely separate, opaque layer above everything else, not subject to layering anything over it

I'm actually surprised to hear that this is not the fix here. Instead, they optimized the rendering code, which might not preclude more sophisticated attacks on that side channel.


As noted elsewhere, this would cause sites with modals or menus to appear broken when the menus or modals are stuck behind the iframe. Yes, I know, don't make sites with iframes and lightboxes or whatever then, but people do, and the option they chose won't break things for those people.


That's similar to what Kaminsky proposed with Iron Frame[0], but obviously it'd have to be opt-in. Applying Iron Frame-like rendering to all iframes would break a lot of content.

[0]: https://dankaminsky.com/2015/08/09/defcon-23-lets-end-clickj...


You can get 2. more or less in Firefox with privacy.firstparty.isolate = true, or with various container extensions.

Or one could write an extension that sets sandbox attributes on iframes. I'm not sure if one exists yet.
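As a rough sketch of what such an extension's content script might do (hypothetical; it would need to run at document_start, since the sandbox attribute only applies when a frame loads or navigates):

  // Add an empty (most restrictive) sandbox to iframes that don't already
  // declare one; this strips scripts, form submission, popups, etc.
  for (const frame of document.querySelectorAll('iframe:not([sandbox])')) {
    frame.setAttribute('sandbox', '');
  }
  // Frames injected later would need a MutationObserver to be caught too.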


Building a powerful sandbox that maintains anonymity is a difficult problem. The statement that CSS is overpowered is clickbait for a different argument.


"Besides Habalov, another researcher named Max May independently discovered and reported this issue to Google in March 2017."

I wonder why they are fixing it now, when they didn't do anything for over a year


The disclosure is being published now. From the timeline at the bottom of the article, it becomes clear that they only ignored the issue for 2 months.

That is still more than zero delay, but a bit more reasonable.


How is that?

> 2017-12-06 Fixed with Chrome version 63.0
> 2018-05-15 Fixed with Firefox Quantum version 60.0

That's 9 months for Chrome and more than a year for FF Quantum?


But also "2017-11-26 Reported the vulnerability to Mozilla’s VRP" because "due to some misunderstandings on our side, reporting the vulnerability to Mozilla was delayed".

So I think "more than a year" is a little unfair to Mozilla.



This is really a timing side-channel attack. Such side-channel attacks have been known about for a long time, but this one allowed you to leak information about the color of a single pixel in your target page. I wouldn't say this is due to CSS being overpowered; it is much more due to the way that iframes can be configured to show a single pixel, and the way that rendering times for your page are leaked into JavaScript. If your rendering is not done in constant time then the vulnerability occurs, and CSS in this case allowed you to trigger non-constant-time rendering. The fix was to make the rendering constant-time again.


This line particularly caught my eye:

> It was quite surprising for us to find out that the blend mode layers were able to interact with cross-origin iframes in the first place so we investigated this further.

Because a few years ago I was wondering the same thing: someone figured out a side-channel timing attack using SVG filters, exploiting differing execution times of code paths in the Erode filter (probably also in the Dilate filter, but they only needed one). IIRC, they could even apply it to an iframe with a source:// URL of another domain (probably FB again), whose scrollbars they could control to scroll into view the session key that showed up in the HTML source somewhere. This raised a couple of questions with me ...

Why can we put a source:// URL in an iframe at all? Why can we control the scrolled position of content in a cross-domain iframe? And why are SVG filters capable of processing cross-domain content instead of just seeing a black square or something?

AFAIK they fixed it by changing the filters, making sure all code paths were of equal length. That's only part of the right solution, because side-channel attacks can come from anywhere (it's kind of their thing). But making the whole system more robust by simply not allowing operations on cross-domain content would do a whole lot for plugging many of these bugs.

I'm kinda surprised there haven't been a lot more of these. Could be I've missed them though.


This is a really interesting vulnerability, and really interesting research, but the clickbait title-with-an-agenda is really terrible. Can we change it?


> Habalov says that depending on the time needed to render the entire stack of DIVs, an attacker can determine the color of that pixel shown on the user's screen.

I wonder how this is the case?

Anyway, lots of interesting vulnerabilities of late that utilize the measurable time of computations, and use it to reconstruct data. I think that type of "lossy attack" is so cool and creative.


A fancy darkening algorithm on #000 can be short-circuited. I don't know if that's precisely what's going on here, but it's a general example.
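For illustration, a per-channel "multiply"-style blend with a data-dependent fast path looks something like this (a made-up example of the general problem, not the actual browser code):

  function blendChannel(backdrop, source) {
    if (backdrop === 0) return 0;  // fast path: black stays black
    // Slow path: the per-channel math only runs for non-black pixels,
    // so render time depends on the value of the pixel being blended.
    return Math.round((backdrop * source) / 255);
  }

  // The fix is to compute every pixel the same branchless/vectorized way,
  // so timing no longer reveals the underlying value.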


The link was changed to point to Habalov's own in-depth article [1].

[1] https://www.evonide.com/side-channel-attacking-browsers-thro...


This isn't the first time a color-exfiltration vulnerability has been found in CSS. Disappointing that we're revisiting this same issue.


I believe this is also why there's not a CSS equivalent of Adobe's Pixel Bender shader system.

They're worried that someone could write a shader that would detect sensitive info (like a credit card number) and jank the render thread in an observable way to leak the number to a script running in the parent window.


Actually, CSS shaders are a thing, but can't access DOM content pixels for exactly that reason.

http://alteredqualia.com/css-shaders/article/


Expecting browsers to prevent cross-site exfiltration and tracking is pretty iffy. When it really matters, even preventing that at machine level is pretty iffy.

Compartmentalization among multiple machines is better. Maybe sandboxing aka light virtualization can be adequate. But I'd rather go with at least full virtualization. And when it really matters, I use multiple hosts, with network isolation.


I am fine with locking down cross-origin iframes, but one major annoyance is the inability to detect whether the iframe has a scrollbar and if so, adjust the size until it doesn't have one. I really wish that there was a "height: auto" property for iframes that automatically adjusted the height so that it didn't need a scrollbar.
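The usual workaround today needs cooperation from the embedded page, e.g. a postMessage handshake roughly like this (the element id and message shape are made up):

  // Inside the embedded page: report the document height to the parent.
  parent.postMessage({ frameHeight: document.documentElement.scrollHeight }, '*');

  // In the embedding page: resize the iframe when the message arrives.
  window.addEventListener('message', (e) => {
    // Real code should check e.origin before trusting the message.
    const frame = document.getElementById('embed');
    if (frame && e.data && typeof e.data.frameHeight === 'number') {
      frame.style.height = e.data.frameHeight + 'px';
    }
  });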


Once upon a time the specs planned an <iframe seamless>, but unfortunately it was never widely implemented, probably because it also tried to do things like letting the outer CSS leak into the seamless frame.


Short of something drastic like Lynx, is there a good dumbed-down browser from a reputable source? It feels like there's no good check on site complexity. The security risk, but also the absolutely embarrassing resource consumption, is almost too much to bear.


You don't realize it, but by saying this you are rejecting the modern Web (insert horror screams here). So maybe try the old passive web: Gopher.


Hah, maybe a middle ground vs. Gopher. The GDPR mitigation from NPR (https://text.npr.org/) was refreshing. The amount of effort spent on display capabilities above and beyond moving around plain old text + some markup for layout has been totally out of proportion to the end-user value. I kind of wonder things like, "How much memory/CPU does it take to run Slack (just on my laptop) vs. the total at NASA to design and manage the Apollo 11 mission?" Anyway, that's OT enough. It's just that this is such a ridiculous vector that it's irritating.


text.npr.org is older than the GDPR

https://news.ycombinator.com/item?id=15342758


Thank you! I didn't know that.


Even with these issues, the main browsers are going to be the most secure, because it's very easy to screw up things like certificate validation. A way to prevent this sort of attack is to use a JavaScript whitelist via an extension like NoScript.


Now that is crafty.


Is there a mirror? Site's down for me




