HTTP/2 is here. Goodbye SPDY? Not quite yet (cloudflare.com)
267 points by akerl_ on Dec 3, 2015 | 69 comments



In terms of optimal performance for end users... I should now be hosting all files on my own server w/ Cloudflare rather than using something like Google's CDN? For example, jQuery. The reason being that those files will all load in parallel on my own domain, whereas for another domain like Google's, the browser would have to negotiate a separate SSL connection and wait a bit longer?

Is this correct? Or is there more to it than that?


You are correct.

What I'm now doing is reducing the number of third party domains I call.

In essence, where I used to use cdnjs.cloudflare.com or whatever other externally hosted JS or CSS, I'm now mostly self-hosting, but still behind CloudFlare.

You can see this in action on https://www.lfgss.com/ which is now serving everything it can locally... only fonts and Persona really remain external.

I have been using preconnect hints to try and reduce the latency created by contacting those 3rd parties, but TBH, because I use SSL as much as possible, those connections still take time to establish. In that time, most of the assets can be delivered over my already-open connection.
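
For reference, a preconnect hint is just a link element in the head. A minimal sketch (the hostnames here are only examples, not necessarily the ones I actually preconnect to):

    <!-- open the TCP+TLS connection early, before any font/script is requested -->
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    <link rel="preconnect" href="https://login.persona.org">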

There is an argument that cdnjs/Google CDN or whatever is better for the web, but personally I'm unconvinced. I think you should self-host/control all of the JavaScript that runs on your own site, and unless the exact versions of the exact libs are already cached in end users' browsers, the benefits aren't even there.

This also looks to be a smarter thing to do anyway; the increasing prevalence of ad-blocking tech is impacting 3rd party hosted assets, and thus the experience of your users. You can mitigate that by self-hosting.

I haven't obliterated first-party extra domains; for example, I still use a different domain for assets uploaded by users. This is a security thing: if I could do it safely, I'd serve everything from just the one domain.

Basically: self-host. HTTP/2 has brought you the gift of speed to make that a good option again.


If your first-party extra domains are advertised in your SSL cert, then Chrome at least will use the same connection for those assets too.

See this: https://blog.cloudflare.com/using-cloudflare-to-mix-domain-s...
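
A quick way to see which hostnames a cert advertises (and so which requests Chrome might coalesce onto one connection, provided they also resolve to the same IP) is something like this, swapping in your own hostname:

    $ echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'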


The first party extra domains use a different domain and .tld altogether.

A bit like how google.com is for maps and anything users upload goes to googleusercontent.com.

LFGSS is served from www.lfgss.com and the user assets go via lfgss.microco.sm, and proxied user assets (another level of distrust altogether) are going via sslcache.se.

I own all of the domains, and they're on the same CloudFlare account, but we don't yet offer ways to give users control over which domains get SNI'd together, and this is especially true when the domains are on different CloudFlare plans.

That said... it's cool. To reduce everything from 8 domains down to 3 or 4 is a significant enough improvement that I'm happy.


I think the case for hosting jQuery and the like on external, presumably cached, CDNs is overstated. Library version fragmentation and, to a lesser extent, CDN fragmentation have to be weighed against the cost of the additional connection.


For HTTP/2 it's a lot better to have fewer domains since the assets download in parallel, as you say. I think the only advantage of hosting jQuery from a shared place would be cross-site browser caching. CloudFlare already acts as a CDN for your static files.


Wouldn't caching be a big thing for libraries like jQuery? It's highly likely that jQuery was used by one of the sites a user visited recently... why not still take advantage of the fact that jQuery may be cached locally?


But perhaps not as likely that one of those sites had exactly the same version of jQuery.


Because most browser caches are insanely small and really eager to evict stuff, especially on mobile phones.


Ideally, CloudFlare would allow you to route specific paths to different backends so you could just aggregate multiple services within your single domain for the fewest connections and DNS lookups.

Feels crazy, but this makes me think of proxying imgix, which uses Fastly (not supporting SPDY or HTTP/2 yet), through CloudFlare. I'll just set up CNAMEs on my imgix account that are subdomains of my main domain, then add them to CloudFlare with acceleration on - but no caching (since imgix serves images by user agent). This adds an extra datacenter-to-datacenter hop, but hopefully that's really fast, and upgrading the client to SPDY or HTTP/2 would outweigh it.
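
Roughly what I have in mind, as a sketch (all names are made up, and this assumes CloudFlare page rules let you bypass caching per hostname):

    ; DNS: subdomain of my main domain, proxied ("orange-clouded") through CloudFlare,
    ; pointing at the imgix source hostname
    img.example.com.    CNAME    example.imgix.net.

    ; CloudFlare page rule for img.example.com/*
    ;   Cache Level: Bypass    (imgix still varies responses by user agent)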

Anybody else tried something like this?


Coming soon.


Awesome! With that + HTTP/2 server push, we'll really be flying.

We already started to proxy our S3 / CloudFront assets through our load balancer so they can be cached and served through the SPDY (now HTTP/2) CloudFlare connection. However, since we're using imgix to serve different images by device, we can't allow CloudFlare to cache.

I've set up some tests to proxy Fastly through CloudFlare and my initial tests are inconclusive as to whether the crazy extra hop is worth it. It seems that if we have tons of images, it probably will be faster, but most of our pages only load about 6 images above the fold and lazy load everything else, so that might be why the difference is negligible. I'll have to test on a page where more images download concurrently to see if 1 extra hop to get SPDY and HTTP/2 is worth it.


One advantage of a service like CDNJS for a resource used by a number of unique sites, like jQuery, is that the resource will often be in the browser's cache. That value diminishes quickly if the particular version of the resource and the location from which it is served is not widely used. So, for widely used resources like jQuery, it can still make sense even in an HTTP/2 world to use a third party service. On the other hand, other HTTP/1.1 performance techniques, like domain sharding, can actually substantially hurt HTTP/2 performance.


You're not quite correct. In most cases, yes, but for jQuery specifically, your users will get better performance by continuing to use the Google CDN.

The reason being that most likely they've already got it in their cache and won't make a new call to Google (or you) at all.


There are way too many versions in use everywhere, not to mention way too many CDNs, for this to have much of an impact.

CF is already a CDN, so it's better to just pipe all the assets through a single connection rather than take the likely hit of setting up another connection just for jQuery.


I really wish Microsoft had given HTTP/2 support to IE 11 on Windows 8/8.1. Any insight as to why they decided not to support it in IE 11 on Windows versions below 10 would be appreciated.

Many of our users are stuck with Windows 8/8.1, or even 7, for many more years, unfortunately. Some of them won't even have another browser as an option (enterprise...).


See my point below about ALPN. You need to advertise support for HTTP/2 in the TLS ClientHello and this logic is contained (on a Microsoft platform) in schannel.dll.

Microsoft first added ALPN to SChannel in 8.1 and RARELY updates this library outside of OS releases, so that's (at least one reason) why you won't see it on Windows 8 / Server 2012.


They're not even updating IIS 8.5 to support HTTP/2 as far as anyone can tell; you'll have to upgrade to Windows Server 2016 to get it.


"Updating" IIS to support HTTP/2 means updating http.sys, something they are not keen on doing without a major OS upgrade.


Nor do they like updating schannel.dll (the underlying SSL/TLS stack) unless there's an extremely serious vulnerability in it. And even then, they bungle it more often than not (http://www.infoworld.com/article/2848574/operating-systems/m...).

The reason SChannel matters is that the protocol used to negotiate which "next generation" protocol is to be used for the HTTP connection (not session, minor point) is something called Application-Layer Protocol Negotiation. ALPN is a TLS extension sent as part of the ClientHello but wasn't added to SChannel until Windows 8.1/2012 R2 Server. (There was a predecessor to ALPN called NPN that Adam Langley authored/implemented for Chrome but Microsoft never implemented it.)
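
You can watch the negotiation yourself with OpenSSL 1.0.2+ (which added the -alpn flag); the hostname below is just an example:

    $ echo | openssl s_client -connect www.cloudflare.com:443 \
        -alpn 'h2,spdy/3.1,http/1.1' 2>/dev/null | grep ALPN
    # prints something like "ALPN protocol: h2" if the server picked HTTP/2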


I'm quite surprised that there are a lot of browsers in the wild that support SPDY, but not HTTP/2, given auto-updating. But that's what their numbers show. Maybe mobile skews this?


I think it has more to do with "old" IE11 versions on Windows < 10: see http://caniuse.com/#search=http%2F2 vs. http://caniuse.com/#search=spdy

An awful lot of companies still use IE and not Windows 10 ;)


The caniuse.com data on http/2 appears to have some flaws. The biggest buckets of browsers that support SPDY but not HTTP/2 for our website right now are:

a) Chrome for mobile
b) Safari on older Mac OS X versions
c) older Chrome for desktop versions
d) Internet Explorer (small impact)

Other websites might see different ratios depending on their audience.

Stay tuned for instructions on how to gain protocol version insight for your own website on CF.


Thanks for the additional insight and statistics :)


Hmm, does anyone know how to support both SPDY and HTTP/2 on nginx >= 1.9.5, which only has the "ngx_http_v2_module" module built in? What is the nginx configuration to support both SPDY and HTTP/2?


We developed our own patch to NGINX that allows it to support both SPDY/3.1 and HTTP/2 and to negotiate correctly. Stock NGINX allows you to have one or the other, but not both.


Would be nice to see that patch open sourced (at least I can hope) ;) :)


I'm sure we will. We open source pretty much everything we can (i.e. we don't open source stuff that's too complex to extract from our business logic).


Stay tuned for that. Not all gifts at once. ;-)


You can't: "This patch replaces the SPDY module in NGINX."

https://www.nginx.com/blog/http2-r7/

"Before installing the nginx‑plus‑http2 package, you must remove the spdy parameter on all listen directives in your configuration (replace it with the http2 and ssl parameters to enable support for HTTP/2). With this package, NGINX Plus fails to start if any listen directives have the spdy parameter.

NGINX Plus R7 supports both SPDY and HTTP/2. In a future release we will deprecate support for SPDY. Google is deprecating SPDY in early 2016, making it unnecessary to support both protocols at that point."
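
For completeness, the stock nginx >= 1.9.5 configuration is just the http2 parameter on the listen directive, replacing spdy (cert paths below are placeholders):

    server {
        listen 443 ssl http2;    # ngx_http_v2_module; mutually exclusive with the old "spdy" parameter
        server_name example.com;

        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;
    }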


I believe it's either/or, but not both.


Does anyone know if they support HTTP/2 on the backend side too? They didn't with SPDY, and I think it would help to multiplex connections all the way.


Not right now. We are currently experimenting with Server Push because we think it will help with the end user experience more than HTTP/2 to the origin server. You can see that running on the experimental server https://http2.cloudflare.com/

The question is... does HTTP/2 on the backend help that much? We aren't restricted like a browser in terms of bandwidth, latency, or the number of connections we can open. The greatest benefit of HTTP/2 is between the browser and us, but origin HTTP/2 hasn't been forgotten.


I'd love to see CloudFlare enable admins to utilize Server Push without extra configuration on their backend.

My ideal situation is one where I can have my webapp specify its dependencies through a spec such as Server Hints[1] (something like the Link headers sketched below), and have them be requested and cached edge-side, then turned into a Server Push to the end user.

[1]: https://www.chromium.org/spdy/link-headers-and-server-hint/l...
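
By "specify its dependencies" I mean headers along these lines on the app's responses (paths invented; the Server Hints draft above uses rel=subresource, while the newer preload spec spells it rel=preload):

    Link: </css/site.css>; rel=preload; as=style
    Link: </js/app.js>; rel=preload; as=script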


Stay tuned.


Surely HTTP/2 to the origin server will reduce latency between Cloudflare and the origin server and thus reduce overall latency to the browser?


Yes, but you have to look at the mix of things being delivered to the client. For a typical CloudFlare customer you'll have some dynamic HTML delivered all the way from the origin and then a bunch of static assets delivered from our cache. HTTP/2 between us and the browser is very valuable there, less so on the origin connection.


I see. That's fair enough. Thanks.


Wouldn't HTTP/2 to origin give most of the benefits of Railgun, other than the change delta acceleration?

Seems like for high traffic sites and APIs the persistent non-blocking multiplexed connections with binary transfer might make a big difference?

I think Instart Logic uses the same kind of model in between their proxy nodes called IPTP (inter-proxy transport protocol).


Those page load improvement numbers seem ridiculously good (factor of almost 2 versus HTTP 1.1). Are they really expecting that to hold up in real world cases?


You could already see nearly 90% of this improvement for years with SPDY. SSL with HTTP 1.1 was really slow before Google, FB, etc. started using SPDY years ago. So comparing HTTP 1.1 with SSL to HTTP/2 with SSL seems legit to me.


I thought HTTP/2 was going to be SSL only? Or was that other protocol?


SPDY only runs encrypted. The HTTP/2 specification allows it to run without encryption https://http2.github.io/faq/#does-http2-require-encryption But most of the browser vendors will only allow it to be used with encryption (see footnotes here): http://caniuse.com/#search=http%2F2


Note that the caniuse data is slightly wrong. All browsers only support HTTP/2 over TLS.

(there's an issue to fix that on the caniuse github repo: https://github.com/Fyrd/caniuse/issues/2098)


In their demo[1] it is 20x faster for me. I had to disable HTTP pipelining for it to work correctly (not sure why, but HTTP/2 became a lot faster after I had disabled pipelining).

Minor nitpick: I don't agree with the way they calculate the percentage. If it takes 5% of the time, then it's 20x (i.e. 1900%) faster, not 95% faster.

[1]: https://www.cloudflare.com/http2/


Isn't it interesting that even today, after Microsoft Research showed that pipelining could be almost as fast as SPDY and when activating it in Firefox is an about:config away, people still refuse to include it in any tests?

Google never showed any results vs pipelining. They just said "head of line blocking bad" and "one TCP connection per user good" (for tracking), and people just ate it up without evidence because, I suppose, they viewed HTTP/2 as conceptually simpler and more elegant. Never mind that HTTP/2 didn't address any criticism that PHK had... that's OK because Google was just going to do it anyway.
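
For anyone who wants to try it, the relevant about:config prefs are roughly these (the maxrequests value is just an example):

    network.http.pipelining                true
    network.http.pipelining.ssl            true
    network.http.pipelining.maxrequests    8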


If anyone else wants a bit of history on pipelining wrt Netscape/Firefox etc:

http://kb.mozillazine.org/Network.http.pipelining

And in particular (from links in the above):

"Bug 264354 - Enable HTTP pipelining by default Status: RESOLVED WONTFIX" - in particular one of the last comments in the thread: https://bugzilla.mozilla.org/show_bug.cgi?id=264354#c65

And:

"Bug 395838 - Remove HTTP pipelining pref from release builds Status: RESOLVED WONTFIX": https://bugzilla.mozilla.org/show_bug.cgi?id=395838

My general impression is that there were a few issues on Windows, in particular with "anti-virus software", and some problems with broken proxies -- as well as a handful of issues with hopelessly broken servers.

Additionally, it appears SSL/TLS latency was never really considered (not explicitly stated, but there appear to be implications that on "fast networks" http is "fast enough" that pipelining makes little difference) -- in other words, it does indeed appear that just enabling pipelining as the web moved from plain HTTP to TLS would've sidestepped most of the need for HTTP/2...


Oddly, their demo shows Safari 9 (El Capitan) as not supporting HTTP/2, only SPDY. Other test sites (Akamai) show Safari using HTTP/2 just fine -- Safari does appear (Wireshark capture) to be sending the TLS ALPN h2 thingy.


There appears to be a weird bug with Safari + IPv6 resulting in connections using SPDY when they should use HTTP/2. We're trying to track down whether it's on our side or Apple's.


Interesting! Thanks for the info.


I'm using HTTP/2. Here's some quick stats:

    # tail -n100000 access.log | grep 'jquery.js' | grep 'HTTP/1' | wc -l
    3095

    # tail -n100000 access.log | grep 'jquery.js' | grep 'HTTP/2' | wc -l
    6074


For me, 505 and 1947 respectively. I guess my audience is hipper than yours. :)


For comparison, I enabled HTTP/2 via CloudFlare on a dev site. Results: http://blog.adamowen.co.uk/deploying-http2-using-cloudflare-...


Just tested my side project https://www.gitignore.io and it now has sub second loading time. Unfortunately, adding Google analytics doubles the loading time to about 1.8 seconds.


At least Google Analytics is non-blocking.


Here's a small utility for checking if a web server offers HTTP/2: https://github.com/xyproto/http2check


Is it possible to use HTTP/2 without SSL yet? I tried it a few weeks ago and my browser was just downloading a 4KB file with some random bytes in it; I assume this was the server response, but it wasn't clear.


Per the spec, SSL is not required; however, all major browser implementations require TLS to negotiate HTTP/2.
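
You can still exercise cleartext HTTP/2 ("h2c") against your own server with a non-browser client, e.g. nghttp from the nghttp2 project (the URL is a placeholder):

    # direct "prior knowledge" h2c
    $ nghttp -nv http://example.com/

    # or via the HTTP/1.1 "Upgrade: h2c" mechanism
    $ nghttp -nvu http://example.com/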


So in theory it's not required but in practice it is? And to think it used to be IE which didn't follow specs.


One reason it's required in practice is meddling proxies, which are usually eliminated with TLS, unless you're a corporate user or a Kazakh.


Well, the specification defines two negotiation procedures. You are not forced to support both; you are not even forced to support one, but then how would you make use of it? And you are also free to negotiate it another way should you decide to. The spec defines the protocol, not the transport on which it is used or the way the underlying communication is established. They could have put the two negotiation methods in another RFC for what it's worth, since they don't affect the inner workings of the protocol itself.


The browsers follow the spec fine. The requirement for SSL was almost in the spec, and support for non-SSL is optional for a reason.

Pushing people onto SSL is good.


I just like pushing people around in general.


Am I correct in assuming this means that CloudFlare reads the HTML to determine other files that need to be sent (CSS, JS, images)?


An idea I heard (not CF-related) was to analyze the referers of past requests to determine what resources a user most likely needs. They'll probably implement something like this sometime in the future, if they do it at all.


Push isn't supported so it's all on the browser to request the needed files.


Yet


Google's HTTP load balancer and CDN have supported H2 for a long while.


Both? Yuck.



