Faster Webpages with Cookieless Domains (With Keynote Testing) (symkat.com)
26 points by symkat on Sept 22, 2010 | 13 comments



We were using cookieless domains at quizlet.com for our static assets (css, js, images) for a while, but recently turned them off. Some of our users (enough to be of concern) wrote in saying the site looked all wrong; on closer inspection, they weren't receiving our assets.

Because we're used in a lot of school systems, and many school districts have weird rules (domain whitelists), our static domains were not resolving while our main quizlet.com was. We decided (for now) it wasn't worth investigating how many of our users were affected or if we could do a workaround.

We're still using a subdomain off our main domain (a.quizlet.com), which prevents most cookies from being sent, but not all.

Going forward, I'd prefer to start base64-embedding images in css (only for compliant browsers) and using more sprites to reduce the number of overall requests.
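
A minimal sketch of the base64-embedding idea, assuming Node's built-in fs module; the icon.png path and the selector are hypothetical, and this is just the concept, not anyone's actual build step:

```ts
// Generate a CSS rule that embeds a small image as a data URI,
// so the image needs no separate HTTP request.
import { readFileSync } from 'fs';

function cssDataUriRule(selector: string, pngPath: string): string {
  const base64 = readFileSync(pngPath).toString('base64');
  // Trade-off: the embedded payload is ~33% larger than the raw bytes,
  // and older browsers (e.g. IE7 and below) don't support data URIs.
  return `${selector} { background-image: url("data:image/png;base64,${base64}"); }`;
}

console.log(cssDataUriRule('.logo', 'icon.png'));
```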


What? A few bytes removed from the HTTP request and you double the speed??? There is something I don't understand here.

I suspect the improvement comes exclusively from increasing the limit on the number of connections, by way of now using two sources instead of one, rather than from the removal of a few bytes in the HTTP request.

Unless I missed something...


TCP Slow start plays a big role here. The fewer the packets required to carry the request, the faster the complete request can be addressed by the webserver, leading to a faster download.

http://en.wikipedia.org/wiki/Slow-start

Essentially your client waits for ACKs from the server when it sends its first packets. The greater the RTT the higher the penalty. This is part of why DC -> NJ didn't see much benefit but CA -> NJ saw a nice boost.
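
A back-of-the-envelope sketch of that effect, assuming a 1460-byte MSS and an initial congestion window of 3 segments (typical of 2010-era stacks); the payload size and RTT figures below are illustrative, not measurements from the article:

```ts
// Estimate how many round trips slow start needs to deliver a payload,
// and how the total time scales with RTT.
const MSS = 1460;          // bytes per TCP segment (assumption)
const INITIAL_CWND = 3;    // initial congestion window in segments (assumption)

function roundTripsToDeliver(bytes: number): number {
  const segments = Math.ceil(bytes / MSS);
  let cwnd = INITIAL_CWND;
  let sent = 0;
  let rounds = 0;
  while (sent < segments) {
    sent += cwnd;
    cwnd *= 2;             // slow start: window roughly doubles each RTT
    rounds += 1;
  }
  return rounds;
}

// One handshake round trip plus the delivery rounds:
function approxTimeMs(bytes: number, rttMs: number): number {
  return (1 + roundTripsToDeliver(bytes)) * rttMs;
}

// e.g. a ~30 KB asset at a short-haul vs. cross-country RTT (illustrative numbers only):
console.log(approxTimeMs(30 * 1024, 10));  // small penalty on a DC -> NJ style path
console.log(approxTimeMs(30 * 1024, 80));  // the same rounds cost far more CA -> NJ
```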

There was a great talk on this at Velocity Conf 2010: http://en.oreilly.com/velocity2010/public/schedule/detail/11... - I have not yet found video, but you can flip through the slides.

Check out this visualization of slow start and you can see how sending even one fewer green dot over a large distance could considerably change the overall time.

http://vimeo.com/14439742


More or less: yes. It's the combination of both: the extra connections and the removal of those excess bytes.

If you transmit 10 static files per page request and have 256 bytes of cookies, you're sending 2,560 bytes of cookies, or roughly 2.5 KiB, per page load. That's on the end user's upload bandwidth too, which is typically dramatically less than their download.

Working around the per-domain connection limit also allows more resources to load at the same time; the combined effect is a good increase in speed. Two tweaks for the price of 1. =)


It doesn't quite work like that. TCP works in packets, so 256 bytes of cookies does not necessarily translate into an extra packet sent over the network. So if you have 10 static files per page, you might push a few of them over to an extra packet, but not all of them.
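
A quick sanity check of the packet-boundary point, again assuming a 1460-byte MSS; the cookie-free request sizes are made up for illustration:

```ts
// Count how many TCP segments a request header occupies with and without
// 256 bytes of cookies; only requests near a segment boundary grow by a packet.
const MSS = 1460;  // bytes of payload per TCP segment (assumption)

function packetsFor(requestHeaderBytes: number): number {
  return Math.ceil(requestHeaderBytes / MSS);
}

const baseHeaders = [400, 700, 1300, 1400];  // hypothetical cookie-free request sizes
for (const base of baseHeaders) {
  console.log(
    `${base}B request: ${packetsFor(base)} packet(s) without cookies, ` +
    `${packetsFor(base + 256)} with`
  );
}
// Only the 1300B and 1400B requests cross the 1460B boundary and pick up an extra packet.
```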

It's something worth doing, but it's not the biggest win you can get.


The piece is called 'cookieless domains'. Not 'using static domains for increased connections'.

If the actual speed increase is from something totally unrelated to the piece's conclusion then this is really misleading.

The static domain trick has been around for donkey's years.


A few extra data points about using subdomains vs. a different domain for static content (assuming that you serve your site exclusively off www.domain.com)...

OK: Google Analytics - You can use the function "_setDomainName('www.domain.com')" on your GA tracker to restrict the cookie domain to only the www.

NOT OK: Quantcast - The Quantcast tracking code explicitly forces the cookie domain to be ".domain.com", and the only way around it would be to alter their javascript and host it from your own server, which they do not allow (though I am not sure what the recourse would be). I emailed support about it, and they said they had no plans to let you specify a different cookie domain.
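
For reference, a sketch of where that call sits in the classic asynchronous ga.js snippet of the time; the account ID is a placeholder and www.domain.com stands in for your own hostname:

```ts
// Classic async Google Analytics (ga.js) setup, circa 2010.
declare var _gaq: any[];

_gaq.push(['_setAccount', 'UA-XXXXX-1']);        // placeholder account ID
// Per the comment above: pin the GA cookie to www.domain.com so it is not
// sent along with requests to static subdomains.
_gaq.push(['_setDomainName', 'www.domain.com']);
_gaq.push(['_trackPageview']);
```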


Something interesting to do is view the source for google.com and msn.com, which are compressed really well and built for speed. Then view the source at yahoo.com.

Links to look at for performance: YSlow http://developer.yahoo.com/yslow/ and Page Speed http://code.google.com/speed/page-speed/docs/rules_intro.htm...


This is a total micro-optimization at best. Nearly every client is going to support compressed content. The difference between minified-then-compressed and unminified-but-compressed is tiny.

You are right, though: the difference probably does matter at the scale of Google, MSN, and Yahoo, but I wouldn't focus on it for my site.
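
If you want to check that claim against your own assets, a quick sketch using Node's built-in zlib; the two file names are hypothetical:

```ts
// Compare raw vs. gzipped sizes of a minified and an unminified copy of the same script.
import { readFileSync } from 'fs';
import { gzipSync } from 'zlib';

for (const file of ['app.js', 'app.min.js']) {
  const raw = readFileSync(file);
  const gzipped = gzipSync(raw);
  console.log(`${file}: ${raw.length} bytes raw, ${gzipped.length} bytes gzipped`);
}
```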


15% of clients don't do compression, usually because of "internet security" software altering the Accept-Encoding header.

From the talk you linked to in another comment... http://en.oreilly.com/velocity2010/public/schedule/detail/11...


I have been aware of domains like yimg.com (yahoo static content) forever, but it never occurred to me why you'd bother doing this until I saw this article. Thanks.



I've played with this and the performance improvement is trivial with modern browsers/servers.

Way more hype than reality.

Before messing with cookies: when it takes over a second to receive the core webpage (before the other objects, stylesheets, scripts, images), in Chrome no less, you need to take a serious look at how you are rendering the page on the server, e.g. WordPress. You get enough cache misses with WordPress+plugins and your servers are going to be crying.

The article is also missing some important info for analysis: how large was each of the transmitted parts? How large was the base webpage? Was gzip compression used on the server?

Why not use localhost to eliminate transmission variance and prove, with 10,000 cacheless page fetches, whether going cookieless really helps by more than a few percentage points?
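
A rough sketch of that test, assuming Node, a hypothetical local server on port 8080, and an arbitrary cookie size and run count:

```ts
// Fetch the same page N times with and without a large Cookie header
// and compare the mean response times.
import * as http from 'http';

const URL_PATH = 'http://localhost:8080/';         // hypothetical local server
const FAKE_COOKIE = 'session=' + 'x'.repeat(256);  // ~256 bytes of cookie payload
const RUNS = 1000;

function timedGet(withCookie: boolean): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = process.hrtime.bigint();
    const req = http.get(URL_PATH, {
      headers: withCookie ? { Cookie: FAKE_COOKIE } : {},
    }, (res) => {
      res.resume();  // drain the body; we only care about timing
      res.on('end', () => resolve(Number(process.hrtime.bigint() - start) / 1e6));
    });
    req.on('error', reject);
  });
}

async function main() {
  for (const withCookie of [false, true]) {
    let total = 0;
    for (let i = 0; i < RUNS; i++) total += await timedGet(withCookie);
    console.log(`${withCookie ? 'with' : 'without'} cookie: ${(total / RUNS).toFixed(2)} ms avg`);
  }
}

main();
```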



