How speedy is SPDY? [pdf] (usenix.org)
70 points by ldng on Jan 15, 2015 | 25 comments



> Most performance impact of SPDY over HTTP comes from its single TCP connection

This is not a surprise at all, because Google never tested SPDY against HTTP pipelining. And at least in their mobile test they included the TCP connection time for HTTP but not for SPDY; I suppose their test software just reused the same SPDY connection since the mirrored web pages all were served from the same IP.

They compressed sensitive headers together with attacker-controlled data, leading to the CRIME attack.

They had a priority inversion causing Google Maps to load much slower through SPDY than HTTP.

This new protocol is a complete mess, from beginning to end.


Leaving aside the technical statements about SPDY, the reality of HTTP pipelining is that no-one uses it. According to Wikipedia, Opera is the only major browser that ships with pipelining enabled. Most intermediaries don't support pipelining either.

Pipelining was a well-intentioned feature which didn't solve the core problem: namely, that a big or slow request can block you from doing anything else for a really long time unless you open another TCP connection.
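
For what it's worth, here is a minimal sketch of what pipelining looks like on the wire and where the blocking comes from (plain Python sockets; the host and paths are just placeholders):

    import socket

    HOST = "example.com"
    requests = (
        b"GET /big-report HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /style.css HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    )

    with socket.create_connection((HOST, 80)) as s:
        s.sendall(requests)        # both requests leave back to back...
        body = b""
        while True:
            chunk = s.recv(4096)   # ...but responses must arrive in request order,
            if not chunk:          # so /style.css waits behind a slow /big-report
                break
            body += chunk

    print(len(body), "bytes received")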


Except that Microsoft tested SPDY against pipelining and found that pipelining was essentially just as good. So we're left with a situation where Google could have used HTTP pipelining over SSL (so there are no buggy proxies interfering, just like with SPDY) and gotten pretty much all the benefit with no extra complications at all, but instead there's old HTTP and a new, much more complicated protocol.

And this "head of line blocking" problem... who said it was a problem, Google? In reality you have 4 or more connections that automatically work like spread spectrum where most resources aren't stuck behind a big or slow request. But even if this was an actual problem, a simple hint in the HTML that the resource might take a while and to put other ones on a separate connection would optionally solve this problem, and with almost no extra complexity.


> Except that Microsoft tested SPDY against pipelining and found that pipelining was essentially just as good.

Can you point to that?


I think parent is referring to http://research.microsoft.com/pubs/170059/A%20comparison%20o...

It's a bit sketchy on the details and data; reading it, I certainly end up with more questions than answers.


> the reality of HTTP pipelining is that no-one uses it.

Hmmm, starting to use it seems like less effort than introducing an entirely new protocol.

> the core problem: namely, that a big or slow request can block you from doing anything else

Why is that the core problem?


>> the core problem: namely, that a big or slow request can block you from doing anything else

> Why is that the core problem?

Head-of-line blocking leaves the network idle, and one of the key factors in good front-end performance is overcoming network latency and starting to use the network, i.e. growing the congestion window, as soon as possible.
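
To put a rough number on it, here is a back-of-the-envelope sketch of how many round trips a resource needs under idealized slow start (window doubles each RTT, no loss; the 10-segment initial window and 1460-byte MSS are assumptions):

    def rtts_to_deliver(size_bytes, mss=1460, initial_window=10):
        cwnd, segments_sent, rounds = initial_window, 0, 0
        while segments_sent * mss < size_bytes:
            segments_sent += cwnd   # one round trip delivers cwnd segments
            cwnd *= 2               # the congestion window doubles in slow start
            rounds += 1
        return rounds

    for size_kb in (15, 100, 500):
        print(size_kb, "KB needs", rtts_to_deliver(size_kb * 1024), "round trips")

Every round trip the connection sits idle behind a blocked response is a round trip the window isn't growing.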


I'm pretty sure Opera doesn't ship with it enabled any more, since the move to Chromium. There were all kinds of crazy heuristics to work out whether to use pipelining or not.


"...Google never tested SPDY against HTTP pipelining."

I use pipelining on a daily basis. And, vis-a-vis using HTTP/0.9 or 1.0, I have never been dissatisfied. It just works. (Before you comment, please note I never said I make my HTTP requests using a particular browser.)

This is why I can never take SPDY for what they say it is (or what the silly acronym suggests).

And if in fact the goal was faster speeds and in fact this alternative was faster, why should I accept the opacity it introduces? Does this protocol make it easier or more difficult to view one's own traffic?

Once more, with HTTP pipelining I only request the resources I want. In my case, this does not include advertisements. Is that possible with SPDY? I do not know, but I doubt they would promote such fine grained selection as a feature.

If HTTP pipelining does not work satisfactorily in [ad-supported browser], does that necessarily mean it does not work, full stop? I have tested it thoroughly and my answer is no; but I could be biased.

Because HTTP pipelining has worked beautifully for me over the years, I would be quite disappointed if it were supplanted by some protocol introduced by a company that relies on ad sales to stay in business.

End rant. Sorry, but I am not a fan of SPDY.


Very nice research, kudos to everyone involved.

I agree with the conclusions, especially the very last one.

> To improve further, we need to restructure the page load process

To fully utilise the potential of HTTP/2, we will have to rethink the way we create and manage websites. I've posted more thoughts on this on my blog: https://ma.ttias.be/architecting-websites-http2-era/


It's not surprising that introducing high loss through a network emulator results in reduced performance of a single TCP connection vs. multiple connections. That's because there's a relationship between the maximum bandwidth a single TCP connection can carry and the packet loss % due to TCP's congestion avoidance. Introducing "fixed" packet loss through an emulator isn't necessarily a good representation of a real network where packets would be lost due to real congestion (an overflowing queue).

Throwing many TCP connections into a congested network can let you get a higher share of that limited pipe though...
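
A rough sketch of that relationship, using the well-known Mathis et al. approximation (throughput is about MSS/RTT * sqrt(1.5/p)); the MSS, RTT and loss-rate values below are illustrative, not taken from the paper:

    from math import sqrt

    def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
        # Approximate steady-state throughput of one TCP connection, in bits/s
        return (mss_bytes / rtt_s) * sqrt(1.5 / loss_rate) * 8

    mss, rtt, p = 1460, 0.100, 0.01   # 1460 B segments, 100 ms RTT, 1% "fixed" loss
    single = tcp_throughput_bps(mss, rtt, p)
    print("1 connection : %.2f Mbit/s" % (single / 1e6))
    print("6 connections: %.2f Mbit/s" % (6 * single / 1e6))   # roughly 6x the share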


Wireless networks often have a certain rate of packet loss that is unrelated to congestion, caused by a weak signal or interference.

It's somewhat humorous, because wireless networks are one of the scenarios where SPDY is purported to help most.


Kudos to them for releasing their data and tools [1]! This is how science should work.

[1] http://wprof.cs.washington.edu/spdy/data/


That's a nice study.

The main result is that most of the benefit comes from putting everything through one TCP pipe. This, of course, only works if almost everything on a site comes from one host. This is a good assumption for Google sites, which communicate only with the mothership. It's not for most non-Google sites.


And for those sites, you can always use HTTP pipelining and avoid the whole SPDY can of worms.

Looking at the facts, it seems pretty obvious that whatever theoretical gains SPDY offers in select scenarios, that super-minor gain is not worth the associated complexity cost.

Not to mention I don't like the idea of Google not only running the world's tracking units, the world's most popular browser and most popular websites, but now also dictating internet protocols without taking input from other parties.


They did not dictate anything, but proposed a protocol. It was adapted and changed.

Also, this is not part of a big, evil Google master plan. The engineers who developed it are well known and they presumably tried to do their best from a technical point of view.


What's the complexity cost in SPDY or HTTP/2's case?

For most optimised HTTP/1.x sites there's already a complexity cost of merging JS files, merging CSS files and building sprites - including the tradeoff of getting the bundles right, which of course reduces cachability.


>the tradeoff of getting the bundles right, which of course reduces cachability

If you are revving your bundles with hashes (main.8a4ce55.js), caching shouldn't be a problem. Not sure what your build process is, but there are plugins to do this on most setups.
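
A minimal sketch of the idea, independent of any particular build tool (the file names are placeholders): the name only changes when the bytes change, so the revved file can be cached essentially forever.

    import hashlib, shutil

    def rev_bundle(path):
        with open(path, "rb") as f:
            digest = hashlib.sha1(f.read()).hexdigest()[:7]
        revved = path.replace(".js", ".%s.js" % digest)   # e.g. main.8a4ce55.js
        shutil.copyfile(path, revved)
        return revved   # reference this name from the HTML

    print(rev_bundle("main.js"))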


That's precisely the point: revving bundles with hashes is about destroying cachability.

When two separate resources are bundled together at build time, they're coupled from a caching point of view: if one gets changed and the other doesn't, they both have to be re-downloaded because the bundle has changed.


> For most optimised HTTP/1.x sites there's already a complexity cost of merging JS files, merging CSS files and building sprites - including the tradeoff of getting the bundles right, which of course reduces cachability.

And all of this is a build-time problem.

If we're going to engineer the HTTP-protocol to solve build-tooling and development related problems, we might as well add JS-linting and minifying to HTTP itself as well.

Seriously: This problem is best solved elsewhere.


You've got it the wrong way around: these aren't build-tooling and development-related problems, they're problems with HTTP/1.x that we chose to solve using the build process.


Other parties sat around and didn't come up with and implement a good protocol, did they? Which browsers have pipelining enabled, 15 years after the spec? And at least SPDY has some binary format, not the optimized-for-composition-in-notepad text format.

Just testing SPDY on a site I was building last year, it seemed to provide double digit speedups. Not bad for my effort of typing SPDY into nginx.conf.
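
For reference, it really was roughly that; a sketch of the relevant nginx.conf lines, assuming an nginx build that includes the SPDY module (the certificate paths are placeholders):

    server {
        listen 443 ssl spdy;                     # the "spdy" flag on the listen directive
        ssl_certificate     /path/to/cert.pem;   # placeholder paths
        ssl_certificate_key /path/to/key.pem;
    }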

So yeah, I dislike Google. I use FF and disconnect search. I hate Google's intrusiveness and anti privacy stance. But at least SPDY provides actual benefits today, versus complaining about standards.


Excuse me if I think that sounds like a "We have to do something. This is something. So we must do this"-type argument.


More like "We have to do something to improve the situation. This is something that helps out. We have no other viable options at the moment. So we must do this."


Trivia: the PDF is in a folder called "protected-files" - LOL



