Http 2.0 specs - September 2013 (http2.github.io)
17 points by redcrusher on Sept 19, 2013 | 7 comments



I see all the arguments for optimizing HTTP, but by going binary and increasing complexity the way we're seeing here, I think we lose something valuable.

A huge part of what I know I owe to tinkering with protocols in the late '90s. Being able to send an email using telnet on port 25 was one hell of an eye-opener for me. And even today, being able to quickly debug an HTTP issue using telnet is incredibly handy.
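For anyone who hasn't tried it: you telnet to a server on port 80 (the hostname below is just a placeholder), type a request by hand and finish with a blank line,

    GET / HTTP/1.1
    Host: example.org
    Connection: close

and the status line, headers and body come back as plain readable text. That is exactly the kind of thing a binary framing layer takes off the table.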

Yes, 1.1 will remain, and with it some of the debuggability. But then what you are quickly debugging is not what browsers are actually going to see. Yes, you can add more tools to the mix to help you, but I still think we lose something (much like moving to a binary syslog format, btw).

Is the speed increase to be gained from HTTP/2.0 really worth the loss of discoverability and the increase in complexity? My feeling is that connections are getting faster more quickly than optimizing HTTP would gain us.

If HTTP over TCP is inefficient, can't we try to "fix" TCP? Yes, that will be really hard, but so will getting the Upgrade header to work in order to do HTTP/2.0 over port 80. Too much stuff is interfering with HTTP these days (maybe also a result of the high readability of the current protocol, I don't know).
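For reference, the upgrade dance I mean is the plain HTTP/1.1 Upgrade mechanism; the exact protocol token and the extra negotiation headers in the current drafts will differ, so treat this as a rough sketch:

    GET / HTTP/1.1
    Host: example.org
    Connection: Upgrade
    Upgrade: HTTP/2.0

    HTTP/1.1 101 Switching Protocols
    Connection: Upgrade
    Upgrade: HTTP/2.0

    (binary framing from here on)

Any proxy or middlebox that strips or mangles the Upgrade header means the 101 never comes back and you silently stay on 1.1.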

I wonder whether these aspects are part of the discussion currently happening or whether this feeling of mine is just an effect of me getting old.


It's not that involved to decode a standard binary protocol. At least not involved enough to justify keeping every single user on the web on a less efficient implementation only to facilitate casual debugging with plain text tools.
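As a rough illustration in Python (the field layout here is a generic length-prefixed frame, roughly the shape the drafts use, not the exact wire format):

    import struct

    # Illustrative layout: 2-byte length, 1-byte type, 1-byte flags,
    # 4 bytes whose low 31 bits are the stream id, then the payload.
    def parse_frame(buf):
        length, ftype, flags, stream = struct.unpack(">HBBI", buf[:8])
        return length, ftype, flags, stream & 0x7FFFFFFF, buf[8:8 + length]

    frame = b"\x00\x05\x00\x01\x00\x00\x00\x01hello"
    print(parse_frame(frame))  # (5, 0, 1, 1, b'hello')

A little wrapper like that, or a Wireshark dissector, is a one-off cost; the efficiency win is paid out on every request.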

I'd compare that to gzipping your logs and using zgrep and zcat and z<tool>. Sure, it's a bit more involved, but it's definitely worth the savings to gzip everything.
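Not shell, but the same idea in Python, just to show that "grepping" a compressed log stays a few lines (the file name and pattern are made up):

    import gzip

    # zgrep-style scan of a gzip-compressed log.
    with gzip.open("access.log.gz", "rt", errors="replace") as f:
        for line in f:
            if " 500 " in line:
                print(line, end="")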


> Is the speed increase to be gained from HTTP/2.0 really worth the loss of discoverability and the increase in complexity?

There is some data here: http://www.chromium.org/spdy/spdy-whitepaper

Header compression resulted in an ~88% reduction in the size of request headers and an ~85% reduction in the size of response headers. On the lower-bandwidth DSL link, in which the upload link is only 375 Kbps, request header compression in particular led to significant page load time improvements for certain sites (i.e. those that issued a large number of resource requests). We found a reduction of 45-1142 ms in page load time simply due to header compression.
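A back-of-envelope calculation makes the 375 Kbps case concrete. The request count and per-request header size below are assumed for illustration, not taken from the whitepaper:

    requests_per_page = 60           # assumed
    header_bytes_per_request = 1000  # assumed (cookies, UA, Accept-* ...)
    upload_bps = 375 * 1000          # the DSL uplink quoted above

    upload_s = requests_per_page * header_bytes_per_request * 8 / upload_bps
    print(round(upload_s, 2))               # ~1.28 s of uplink spent on request headers
    print(round(upload_s * (1 - 0.88), 2))  # ~0.15 s after an ~88% reduction

which lands in the same ballpark as the 45-1142 ms range quoted above.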

I'm probably biased though since I'm stuck on a slow connection despite living in a densely populated part of the UK. Connections here won't be improving till 2015 at the earliest. I dread to think how much larger the average web page will be by 2015, both in total size and number of requests.


While I agree readability is valuable, HTTP/1.1 allows too much and is not really sane. See http://www.and.org/texts/server-http for example.
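For instance, header line folding: per RFC 2616 a field value could legally be continued onto the next line as long as it started with whitespace (the header name below is made up):

    GET / HTTP/1.1
    Host: example.org
    X-Example: first part,
        second part, folded onto a continuation line

and that is before you get into the malformed requests real servers silently accept.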


> Is the speed increase to be gained from HTTP/2.0 really worth the loss of discoverability and the increase in complexity? My feeling is that connections are getting faster more quickly than optimizing HTTP would gain us.

I believe the optimizations in HTTP/2 and SPDY are more about reducing latency as much as possible, especially when a single page can pull in multiple resources, some of them off-site. The quicker HTTP can multiplex them all over one single connection to the client, the better the experience.
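Schematically, instead of the usual handful of parallel connections each handling one request at a time, you get one connection carrying interleaved, numbered streams (the file names are just examples):

    one TCP connection:
      HEADERS (stream 1)  GET /index.html
      HEADERS (stream 3)  GET /app.js
      HEADERS (stream 5)  GET /logo.png
      DATA    (stream 3)  ...
      DATA    (stream 1)  ...
      DATA    (stream 5)  ...

so one slow response no longer holds up everything queued behind it the way it does with 1.1.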

This cannot be fixed by getting a faster internet connection.


You can take a look at the discussion here: http://lists.w3.org/Archives/Public/ietf-http-wg/


Sorry, we can't try to fix TCP if we want wide deployment. I think we have learned that by now.





