
Last year, I spent about a week benchmarking SPDY. Server push didn't show any improvement in page load time, even for first-time visitors. The whole benchmark was scripted, with Chrome running in a dummy X server, and the round-trip time was artificially constrained to about 100ms.

Even if it were to show an improvement in some situations (different round-trip times or network speeds), it would be in the range of microseconds to a couple of milliseconds, with a large variance.

Given that it degrades performance for second-time visitors, I would recommend not enabling it without further benchmarking.




We have a lot to learn about how to use server push effectively. That said, let's analyze some actual use cases:

a) Page A currently inlines a half dozen small assets on every page. These inlined resources inflate every page and are delivered at the same priority as the HTML. By contrast, a "dumb push" server delivers these same assets as individual resources via push. Net benefit? Basically the same performance, since inlining is a form of application-layer push. However, a smart server can at least prioritize and multiplex the pushed bytes in a smarter way... Now, let's make the server just a tiny bit smarter: if it's the same TCP connection and the server has already pushed the resource, don't push it on the next request (see the sketch below). Now we're also saving bytes... <insert own smarter strategy here>.
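
A minimal sketch of that "don't push twice on the same connection" idea, using Go's net/http push API. The asset paths are made up, and keying the pushed-set on the client's remote address is a simplification; a real server would tie the state to the connection's lifetime:

```go
package main

import (
	"log"
	"net/http"
	"sync"
)

// Per-connection "already pushed" sets, approximated by keying on the
// client's remote ip:port, which is stable for the lifetime of a TCP
// connection. A production server would attach this state to the
// connection itself and drop it when the connection closes.
var (
	mu     sync.Mutex
	pushed = map[string]map[string]bool{}
)

// pushOnce pushes path at most once per connection.
func pushOnce(w http.ResponseWriter, r *http.Request, path string) {
	pusher, ok := w.(http.Pusher)
	if !ok {
		return // HTTP/1.x, or push otherwise unavailable
	}
	mu.Lock()
	set := pushed[r.RemoteAddr]
	if set == nil {
		set = map[string]bool{}
		pushed[r.RemoteAddr] = set
	}
	already := set[path]
	set[path] = true
	mu.Unlock()
	if !already {
		// Error ignored: the client may have disabled push or reset the stream.
		_ = pusher.Push(path, nil)
	}
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical small assets that the page used to inline.
		for _, p := range []string{"/css/base.css", "/js/boot.js"} {
			pushOnce(w, r, p)
		}
		w.Write([]byte("<html><head>...</head><body>page A</body></html>"))
	})
	// Push requires HTTP/2, which net/http only negotiates over TLS.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```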

b) Page B has two CSS files and one JS file in the critical rendering path. Without push you either inline all three (ouch), or you round-trip to the client and have it parse the HTML to discover these resources and come back to the server... With push, the server can avoid that extra RTT and push those assets directly to the client -- this is a huge performance win. How does the server know what to push? Well, either you made it smart, or you use some adaptive strategy like looking at past referrers and building a map of "when the client requests X, they also come back for Y and Z" -- this is already available in Jetty; a rough sketch of the idea follows.
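
Here's a rough sketch of that referrer-learning strategy in Go (the idea, not Jetty's actual implementation): asset requests carrying a Referer header teach the server which page they belong to, and later requests for that page get those assets pushed:

```go
package main

import (
	"log"
	"net/http"
	"net/url"
	"sync"
)

// depMap learns "when the client requests X, it comes back for Y and Z"
// by watching Referer headers.
type depMap struct {
	mu   sync.Mutex
	deps map[string]map[string]bool // page path -> set of asset paths
}

func (d *depMap) record(page, asset string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.deps[page] == nil {
		d.deps[page] = map[string]bool{}
	}
	d.deps[page][asset] = true
}

func (d *depMap) assetsFor(page string) []string {
	d.mu.Lock()
	defer d.mu.Unlock()
	var out []string
	for a := range d.deps[page] {
		out = append(out, a)
	}
	return out
}

func (d *depMap) middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Learn: an asset request with a Referer tells us which page needed it.
		if ref := r.Header.Get("Referer"); ref != "" {
			if u, err := url.Parse(ref); err == nil {
				d.record(u.Path, r.URL.Path)
			}
		}
		// Apply: push the assets previously seen to follow this page.
		if pusher, ok := w.(http.Pusher); ok {
			for _, asset := range d.assetsFor(r.URL.Path) {
				_ = pusher.Push(asset, nil) // client may decline; ignore errors
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	d := &depMap{deps: map[string]map[string]bool{}}
	// Hypothetical document root; push requires HTTP/2, hence TLS.
	handler := d.middleware(http.FileServer(http.Dir("./public")))
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", handler))
}
```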

The fears of extra bytes are also somewhat exaggerated (they're valid, but exaggerated). The client, if it so desires, can disable push. The client can also use flow control to throttle how much data can be transferred in the first push window (both Firefox and Chrome do this already). Lastly, the client can always decline and/or RST a pushed resource.
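
For the curious, here's what those client-side knobs look like at the frame level, sketched with Go's golang.org/x/net/http2 framer. The stream ID and window size are arbitrary examples, and the frames are just serialized into a buffer rather than sent on a live connection:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"golang.org/x/net/http2"
)

// Frame-level illustration of the controls mentioned above: disabling push,
// shrinking the initial flow-control window so pushed data can't flood the
// connection, and cancelling one pushed stream. A real client (Firefox,
// Chrome, or the x/net/http2 transport) sends these on a live connection,
// after the connection preface.
func main() {
	var buf bytes.Buffer
	fr := http2.NewFramer(&buf, nil)

	// 1) Refuse push entirely, and 2) throttle the first push window.
	if err := fr.WriteSettings(
		http2.Setting{ID: http2.SettingEnablePush, Val: 0},
		http2.Setting{ID: http2.SettingInitialWindowSize, Val: 16 * 1024},
	); err != nil {
		log.Fatal(err)
	}

	// 3) Decline one specific pushed stream (the stream ID is made up;
	//    server-pushed streams always have even IDs).
	if err := fr.WriteRSTStream(2, http2.ErrCodeCancel); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("serialized %d bytes of SETTINGS and RST_STREAM frames\n", buf.Len())
}
```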

Some additional resources:
- http://chimera.labs.oreilly.com/books/1230000000545/ch12.htm...
- http://www.igvita.com/2013/06/12/innovating-with-http-2.0-se...


These ideas sound fine on paper, but for first-time visitors (on a warmed-up browser), server push didn't show any improvement in page load times in my benchmarks. My question is: why enable server push at all?

I am no longer with the company for which I performed the benchmarks, which is why I can't publish them. Perhaps there are other benchmarks out there that show server push is more performant. If so, I would be happy to see them.


I don't know the details of your benchmark methodology or setup (server / client), so I'm not sure I can offer a meaningful response... short of: let's not confuse "my benchmark failed" with "the feature doesn't work". Anything from a poorly implemented server (broken multiplexing, flow control, prioritization, etc.) to bugs in past versions of Chrome could explain the result.

The Jetty guys had a couple of nice demos they showed off at various conferences. Here's one: http://www.youtube.com/watch?v=4Ai_rrhM8gA




