Perhaps someone with more experience with network protocols can explain the hype about SPDY, since I can't seem to figure it out. Looking at the features Google is touting, I can't help but feel underwhelmed:

- Single Request per Connection. It seems that HTTP 1.1 already addressed this with pipelining.

- FIFO Queuing. I feel like the client is in a better position than the server to know in what order the page needs to be rendered. Why shouldn't the server respond in the order the client asked?

- Client Initiated Request. Wouldn't it be better to inform the client of what it needs rather than guessing which files the client needs and sending them down the pipe? This feature seems like it could waste bandwidth on resources that would have been cache hits.

- Uncompressed headers. For slow lines, compressed headers might be nice if the headers were very large. That said, I think a better solution than compressing data is to not send it at all. (If you want to increase speed, do you REALLY need to send User-Agent and Referer at all?) The smallest data is the data that isn't sent.

- Optional data compression. SPDY forces data compression? That seems wasteful of power, especially for mobile devices sending picture, sound, or video data.

Of course, this list is all just blowing smoke until it's actually tested. However, I couldn't find an independent study of SPDY performance.




To address some of your points:

- Pipelining is still single-file request-response, though: responses have to come back in the order the requests were sent. With SPDY, you can send multiple requests at once, and the server responds to them in whatever order it likes.

- The point of removing the FIFO queuing is that the server can start responding with cheap files before the more expensive resources are ready. The HTML itself often takes a while to generate server-side, whereas CSS and JS files are usually served straight off disk.

In a FIFO model, the 200ms the client spends waiting for the HTML to be generated is just wasted. You could be using that time to download CSS, JS, images, etc. (see the sketch below).
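To make the head-of-line problem concrete, here's a minimal sketch of HTTP/1.1 pipelining over a raw socket (the host and paths are made up, and plenty of real servers and proxies mishandle pipelining):

    import socket

    sock = socket.create_connection(("example.com", 80))

    # Two pipelined requests go out back-to-back on one connection.
    sock.sendall(
        b"GET /generated.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /static/style.css HTTP/1.1\r\nHost: example.com\r\n\r\n"
    )

    # HTTP/1.1 requires responses in request order, so the bytes for
    # style.css cannot arrive until the slow, server-generated HTML
    # response has fully completed: that wait is the wasted time.
    reader = sock.makefile("rb")
    first_status = reader.readline()  # necessarily the HTML's status line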

- There are two options for server-initiated requests in SPDY. One is where the server says, "Since you requested this resource, you'll probably also want this, this, and this" (i.e. it sends the client links to the related resources). The other is where it actually pushes them: "Since you requested this resource, here are these other resources you'll want as well."

In the first case, the client can begin processing those other files (e.g. checking its local cache, or actually making a request for them) before the original resource has finished downloading/parsing. In the second case, it may be that the original resource and the "sub-resource" (e.g. an HTML file and its attached CSS file) have similar caching rules, so if a client requests one it's likely to request the other anyway.
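Client-side, the two modes might be handled roughly like this (a sketch only; the handler and object names here are invented, not a real SPDY API):

    def on_push_hint(url, cache, conn):
        # Mode 1: the server only hinted at a related resource (a link).
        # Start fetching it early unless we already have a copy.
        if url not in cache:
            conn.request(url)

    def on_pushed_stream(stream, cache, conn):
        # Mode 2: the server pushed the resource body itself.
        # If it's already cached, cancel the stream rather than re-download.
        if stream.url in cache:
            conn.reset_stream(stream.id)  # the SPDY analogue is RST_STREAM
        else:
            cache[stream.url] = stream.read_body()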

- SPDY also has options for not including those kinds of headers (e.g. User-Agent, Host, Accept-*) on every request. But even when you do that, compression still has benefits: once you've removed all the redundant data, compression will still shrink what's left, so why wouldn't you?

- I agree there are certain kinds of content that don't benefit much from compression. But on almost all platforms, CPU power is far more plentiful than network capacity. In fact, I can't think of a single platform where that's not the case...


HTTP pipelining is quite different. When the server takes a long time to respond to a single request, it stalls the entire pipeline. To exploit parallelism you have to open multiple TCP connections, but that is less efficient than a single connection because each of them has to go through TCP slow start.

As for header compression, have you checked how large headers are these days? They can easily be 1.5 KB.
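You can see the effect with a plausible (invented) header block; exact numbers will vary:

    import zlib

    # A made-up but realistic request header block; real browsers send
    # something of comparable size with every request.
    headers = (
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 "
        b"(KHTML, like Gecko) Chrome/16.0.912.63 Safari/535.7\r\n"
        b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
        b"Accept-Encoding: gzip,deflate,sdch\r\n"
        b"Accept-Language: en-US,en;q=0.8\r\n"
        b"Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3\r\n"
        b"Cookie: session=0123456789abcdef; prefs=compact\r\n"
        b"\r\n"
    )

    print(len(headers), "->", len(zlib.compress(headers)))

zlib typically cuts a block like this roughly in half, and SPDY keeps one compression context per connection, so headers repeated on later requests compress down to almost nothing.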


- Pipelining would be pretty cool if it worked. There are a number of problems with it:

  * head-of-line blocking
  * transparent proxies that don't support pipelining properly
  * error detection

- FIFO queuing: Why is the client in a better position to know in what order the page needs to be rendered than the server? Isn't the server the one that knows all the resources that need to be sent to the client?

- Client-initiated requests: Yeah, server push is a complicated area. But there are some cases where server push is clearly superior. For example, data URI inlining ruins cacheability. It's better to server-push an already-cached resource that can be RST_STREAM'd than to inline a resource and make it larger and uncacheable.

- While we'd like to get rid of headers as much as possible, it's still impractical to completely eliminate headers like User-Agent.

- SPDY does not force data compression, and optional data compression has been removed in draft spec 3.
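For the curious, the cancellation in that push example is tiny on the wire. A rough sketch of a SPDY/3 RST_STREAM frame, based on my reading of the draft framing (worth checking against the spec itself):

    import struct

    CANCEL = 5  # RST_STREAM status code: client no longer needs the stream

    def rst_stream_frame(stream_id, status=CANCEL):
        # Control frame header: control bit + version 3 (0x8003), frame
        # type 3 (RST_STREAM), flags (0) in the top byte, 24-bit length (8).
        header = struct.pack(">HHI", 0x8000 | 3, 3, 8)
        # Payload: 31-bit stream ID plus a 32-bit status code.
        payload = struct.pack(">II", stream_id & 0x7FFFFFFF, status)
        return header + payload

    print(rst_stream_frame(2).hex())  # 16 bytes total

So declining an already-cached pushed resource costs the client a 16-byte frame instead of a re-download of the body.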



