In terms of optimal performance for end users... should I now be hosting all files on my own server w/ Cloudflare rather than something like Google's CDN? For example, jQuery. The reasoning being that those files will all load in parallel on my own domain, whereas for another domain like Google's, the browser would have to negotiate a separate SSL connection and wait a bit longer?
Is this correct? Or is there more to it than that?
What I'm now doing is reducing the number of third party domains I call.
In essence, where I used to use cdnjs.cloudflare.com or whatever other externally hosted JS or CSS, I'm now mostly self-hosting, but still behind CloudFlare.
You can see this in action on https://www.lfgss.com/ which is now serving everything it can locally... only fonts and Persona really remain external.
I have been using preconnect hints to try and reduce the latency created by contacting those 3rd parties, but TBH the fact that I use SSL as much as possible means those connections still take time to establish. In that time, most of the assets can already be delivered over my existing open connection.
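For anyone curious, a preconnect hint can live in the page markup or be sent as an HTTP header; a minimal nginx sketch of the header form (the font host is just an example, and whether a given browser honours the header form is up to the browser):

    # Ask the browser to open the connection to a third-party host early.
    # Hostname is illustrative; list whichever third parties you still use.
    add_header Link "<https://fonts.gstatic.com>; rel=preconnect";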
There is an argument that cdnjs/Google CDN or whatever is better for the web, but personally I'm unconvinced. I think you should self-host/control all of the JavaScript that runs on your own site, and unless the exact versions of the exact libs are already cached in end users' browsers, the benefits aren't even there.
This also looks to be a smarter thing to do anyway; the increasing prevalence of ad-blocking tech is impacting 3rd party hosted assets, and thus the experience of your users. You can mitigate that by self-hosting.
I haven't obliterated first-party extra domains; for example, I still use a different domain for user-uploaded assets. This is a security thing; if I could safely do it, I'd serve everything from just the one domain.
Basically: self-host. HTTP/2 has brought you the gift of speed to make that a good option again.
The first party extra domains use a different domain and .tld altogether.
A bit like how google.com is for Maps and anything users upload goes to googleusercontent.com.
LFGSS is served from www.lfgss.com, the user assets go via lfgss.microco.sm, and proxied user assets (another level of distrust altogether) go via sslcache.se.
I own all of the domains, and they're on the same CloudFlare account, but we don't yet offer ways to give users control over which domains get SNI'd together, and this is especially true when the domains are on different CloudFlare plans.
That said... it's cool. To reduce everything from 8 domains down to 3 or 4 is a significant enough improvement that I'm happy.
I think the case for hosting jQuery and the like on external, presumably cached CDNs is overstated. Library version fragmentation and, to a lesser extent, CDN fragmentation have to be weighed against the cost of the additional connection.
For HTTP/2 it's a lot better to have fewer domains, since assets download in parallel over the one connection, as you say. I think the only advantage of hosting jQuery from a shared place would be cache sharing across sites. CloudFlare already acts as a CDN for your static files.
Wouldn't caching be a big thing for libraries like jQuery? It's highly likely that jQuery was used by one of the most recent sites a user visited... why not still take advantage of the fact that jQuery may be cached locally?
Ideally, CloudFlare would allow you to route specific paths to different backends so you could just aggregate multiple services within your single domain for the fewest connections and DNS lookups.
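Until then, a rough sketch of doing that aggregation yourself with nginx in front of the services, at the cost of the extra hop (upstream hosts/ports are placeholders):

    # One public domain; paths proxied to different services.
    location /img/ {
        proxy_pass https://myaccount.imgix.net/;   # placeholder upstream
        proxy_set_header Host myaccount.imgix.net;
        proxy_ssl_server_name on;                  # send SNI upstream
    }
    location / {
        proxy_pass http://127.0.0.1:8080;          # your app server
    }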
Feels crazy, but this makes me want to proxy imgix, which uses Fastly (no SPDY or HTTP/2 support yet), through CloudFlare. I'll just set up CNAMEs on my imgix account that are subdomains of my main domain, then add them to CloudFlare with acceleration on - but no caching (since imgix serves different images by user agent). This adds an extra datacenter-to-datacenter hop, but hopefully that's really fast, and upgrading the client to SPDY or HTTP/2 would outweigh it.
Awesome! With that + HTTP/2 server push, we'll really be flying.
We already started to proxy our S3 / CloudFront assets through our load balancer so they can be cached and served through the SPDY (now HTTP/2) CloudFlare connection. However, since we're using imgix to serve different images by device, we can't allow CloudFlare to cache.
I've set up some tests to proxy Fastly through CloudFlare and my initial tests are inconclusive as to whether the crazy extra hop is worth it. It seems that if we have tons of images, it probably will be faster, but most of our pages only load about 6 images above the fold and lazy load everything else, so that might be why the difference is negligible. I'll have to test on a page where more images download concurrently to see if 1 extra hop to get SPDY and HTTP/2 is worth it.
One advantage of a service like CDNJS for a resource used by a number of unique sites, like jQuery, is that the resource will often be in the browser's cache. That value diminishes quickly if the particular version of the resource and the location from which it is served is not widely used. So, for widely used resources like jQuery, it can still make sense even in an HTTP/2 world to use a third party service. On the other hand, other HTTP/1.1 performance techniques, like domain sharding, can actually substantially hurt HTTP/2 performance.
There are way too many versions in use everywhere, not to mention way too many CDNs, for this to have much of an impact.
CF is already a CDN, so it's better to just pipe all the assets through the single connection you already have rather than take the (more likely) cost of setting up another connection just for jQuery.
I really wish Microsoft gave HTTP/2 support to IE 11 on Windows 8/8.1.
Any insight as to why they decided not to support it in IE 11 on Windows < 8.1 would be appreciated.
Many of our users are stuck with Windows 8/8.1, or even 7, for many more years, unfortunately. Some of them won't even have another browser as an option (enterprise...).
See my point below about ALPN. You need to advertise support for HTTP/2 in the TLS ClientHello and this logic is contained (on a Microsoft platform) in schannel.dll.
Microsoft first added ALPN to SChannel in 8.1 and RARELY updates this library outside of OS releases, so that's (at least one reason) why you won't see it on Windows 8 / Server 2012.
The reason SChannel matters is that the protocol used to negotiate which "next generation" protocol will be used for the HTTP connection (not session, minor point) is something called Application-Layer Protocol Negotiation. ALPN is a TLS extension sent as part of the ClientHello, but it wasn't added to SChannel until Windows 8.1 / Server 2012 R2. (There was a predecessor to ALPN called NPN that Adam Langley authored/implemented for Chrome, but Microsoft never implemented it.)
I'm quite surprised that there are a lot of browsers in the wild that support SPDY, but not HTTP/2, given auto-updating. But that's what their numbers show. Maybe mobile skews this?
The caniuse.com data on http/2 appears to have some flaws.
Biggest buckets for browsers that support SPDY but not HTTP/2 for our website right now are:
a) Chrome for mobile
b) Safari on older Mac OS X versions
c) Older Chrome for desktop versions
d) Internet Explorer (small impact)
Other websites might see different ratios depending on their audience.
Stay tuned for instructions on how to gain protocol version insight for your own website on CF.
Hmm, does anyone know how to support both SPDY and HTTP/2 on nginx >= 1.9.5, which only has the ngx_http_v2_module built in? What would the nginx configuration be to support both?
We developed our own patch to NGINX that allows it to support both SPDY/3.1 and HTTP/2 and to negotiate correctly. Stock NGINX allows you to have one or the other, but not both.
I'm sure we will. We open source pretty much everything we can (i.e. we don't open source stuff that's too complex to extract from our business logic).
"Before installing the nginx‑plus‑http2 package, you must remove the spdy parameter on all listen directives in your configuration (replace it with the http2 and ssl parameters to enable support for HTTP/2). With this package, NGINX Plus fails to start if any listen directives have the spdy parameter.
NGINX Plus R7 supports both SPDY and HTTP/2. In a future release we will deprecate support for SPDY. Google is deprecating SPDY in early 2016, making it unnecessary to support both protocols at that point."
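To answer the earlier nginx question: stock nginx 1.9.5+ can't do both, because ngx_http_v2_module replaced the old SPDY module, so you can only offer HTTP/2 alongside HTTP/1.1. A minimal sketch (server name and cert paths are placeholders):

    server {
        # http2 replaces the old spdy parameter; stock nginx offers one or the other.
        listen 443 ssl http2;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;
        root /var/www/html;
    }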
Does anyone know if they support HTTP/2 on the backend side too? They didn't with SPDY, and I think it would help to multiplex connections all the way to the origin.
Not right now. We are currently experimenting with Server Push because we think it will help with the end user experience more than HTTP/2 to the origin server. You can see that running on the experimental server https://http2.cloudflare.com/
The question is... does HTTP/2 on the backend help that much? We aren't restricted like a browser in terms of bandwidth, latency or the number of connections we can open. The greatest benefit of HTTP/2 is between the browser and us, but origin HTTP/2 hasn't been forgotten.
I'd love to see CloudFlare enable admins to utilize Server Push without extra configuration on their backend.
My ideal situation is one where I can have my webapp specify its dependencies through a spec such as Server Hints[1], and have them requested and cached edge-side, then turned into a Server Push to the end user.
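In the meantime, one plausible way to declare those dependencies without new backend machinery -- and this is my speculation about how an edge could be told what to push, not a description of anything CloudFlare does today -- is the preload Link header, which an HTTP/2-capable edge or server could translate into pushes. An nginx sketch with placeholder asset paths:

    # Declare the page's critical sub-resources; an HTTP/2-aware edge/server
    # could turn these preload hints into pushes. Paths are placeholders.
    location = /index.html {
        add_header Link "</css/app.css>; rel=preload; as=style";
        add_header Link "</js/app.js>; rel=preload; as=script";
    }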
Yes, but you have to look at the mix of things being delivered to the client. For a typical CloudFlare customer you'll have some dynamic HTML delivered all the way from the origin and then a bunch of static assets delivered from our cache. HTTP/2 between us and the browser is very valuable there, less so on the origin connection.
Those page load improvement numbers seem ridiculously good (factor of almost 2 versus HTTP 1.1). Are they really expecting that to hold up in real world cases?
You've already been able to see nearly 90% of this improvement for years with SPDY. SSL with HTTP/1.1 was really slow before Google, FB, etc. started using SPDY years ago. So comparing HTTP/1.1 with SSL to HTTP/2 with SSL seems legit to me.
In their demo[1] it is 20x faster for me. I had to disable HTTP pipelining for it to work correctly (not sure why, but HTTP/2 became a lot faster after I disabled pipelining).
Minor nitpick: I don't agree with the way they calculate the percentage. If it takes 5% of the time then it's 20x (i.e. 1900%) faster, not 95% faster.
Isn't it interesting that even today, after Microsoft Research showed that pipelining could be almost as fast as SPDY and when activating it in Firefox is an about:config away, people still refuse to include it in any tests?
Google never showed any results vs pipelining. They just said "head of line blocking bad" and "one TCP connection per user good" (for tracking) and people just ate it up without evidence because, I suppose, they viewed HTTP/2 as conceptually simpler and more elegant. Nevermind that HTTP/2 didn't address any criticism that PHK had... that's ok because Google was just going to do it anyway.
My general impression is that there were a few issues on Windows, in particular with "anti-virus software", and some problems with broken proxies -- as well as a handful of issues with hopelessly broken servers.
Additionally, it appears SSL/TLS latency was never really considered (not explicitly stated, but there appear to be implications that on "fast networks" http is "fast enough" that pipelining makes little difference) -- in other words, it does indeed appear that just enabling pipelining as the web moved from plain http to TLS would've sidestepped most of the need for HTTP/2...
Oddly, their demo shows Safari 9 (El Capitan) as not supporting HTTP/2, only SPDY. Other test sites (Akamai) show Safari using HTTP/2 just fine -- Safari does appear (per a Wireshark capture) to be sending h2 in the TLS ALPN extension.
There appears to be a weird bug with Safari + IPv6 resulting in connections using SPDY when they should use HTTP/2. We're trying to track down whether it's on our side or Apple's.
Just tested my side project https://www.gitignore.io and it now has a sub-second load time. Unfortunately, adding Google Analytics doubles the load time to about 1.8 seconds.
Is it possible to use HTTP/2 without SSL yet? I tried it a few weeks ago and my browser just downloaded a 4KB file with some random bytes in it; I assume this was the server response, but it wasn't clear.
Well, the specification defines two negotiation procedures: ALPN during the TLS handshake for "h2", and the HTTP/1.1 Upgrade mechanism for cleartext "h2c". You are not forced to support both, you are not even forced to support one, but then how would you make use of it? And you are also free to negotiate it another way should you decide to. The spec defines the protocol, not the transport on which it is used or the way the underlying communication is established. They could have put the two negotiation methods in another RFC for what it's worth, since they don't affect the inner workings of the protocol itself.
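On the original question: browsers only negotiate HTTP/2 via ALPN over TLS, so they never speak h2 on a cleartext port -- which would explain the 4KB "download": the browser sent an HTTP/1.1 request and got HTTP/2 binary frames back. If you just want cleartext h2c for testing with a prior-knowledge client such as nghttp, a minimal nginx sketch (port is arbitrary):

    server {
        # Cleartext HTTP/2 (h2c). Browsers won't use this; prior-knowledge
        # clients (e.g. nghttp) will, and HTTP/1.1 clients will get h2 frames.
        listen 8080 http2;
        root /var/www/html;   # placeholder
    }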
An idea I heard (not CF related) was to analyze the Referers of past requests to determine which resources a user most likely needs next. They'll probably implement something like this sometime in the future.