Show HN: JS library to make your website instant
426 points by dieulot on Feb 8, 2014 | hide | past | favorite | 174 comments
http://instantclick.io/

It seems the internet architecture won't bring us instant websites anytime soon. So here is a hack.

I released this JS library last month (today is the version 2.0 release, fixing v1.0's rough edges). It uses HTML5's pushState and preloading to make a website instant.

It works like this: before clicking on a link, you'll hover over it, and there's a delay of 200-300 ms between those two events. InstantClick uses this delay to preload the page, so when you click the link it displays instantly in most cases. (You can test this on the website.)
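To illustrate the general idea, here is a hedged, minimal sketch (not InstantClick's actual source; back/forward handling, scroll restoration and script re-execution are left out):

    // Sketch: preload a same-origin page on hover, swap it in on click.
    var cache = {};

    document.addEventListener('mouseover', function (e) {
      var link = e.target.closest && e.target.closest('a[href]');
      if (!link || link.origin !== location.origin || cache[link.href]) return;
      var xhr = new XMLHttpRequest();
      xhr.open('GET', link.href);
      xhr.onload = function () { cache[link.href] = xhr.responseText; };
      xhr.send();
    });

    document.addEventListener('click', function (e) {
      var link = e.target.closest && e.target.closest('a[href]');
      if (!link || !cache[link.href]) return; // fall back to normal navigation
      e.preventDefault();
      var doc = new DOMParser().parseFromString(cache[link.href], 'text/html');
      document.title = doc.title;
      document.body.replaceWith(doc.body); // note: inline scripts won't re-run
      history.pushState(null, doc.title, link.href);
    });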

That's similar to pjax and Turbolinks, in that it uses HTML5's pushState and Ajax, with preloading thrown into the mix.

In case preloading every link a user hovers over is too much work for the server, there's also an option to set a delay before preloading kicks in, or to preload on "mousedown". Mousedown is when you press your mouse button (a click is when you release it). This way you still get a slight "magical" speed advantage, plus pjax's benefits (notably, no recalculation of scripts/styles on every page change).

For mobile it preloads on touchstart, so you get 400 ms to preload. This is different from FastClick: with InstantClick a user can still double tap to zoom without triggering a link.

As said in the first sentence, the internet architecture can't bring us instant websites today or tomorrow. So I think this — InstantClick — could be a pretty big thing to improve the situation at scale.

The main challenge I see is that existing JS scripts may not work with it out of the box. I don't do much web development nowadays, though, so I don't know how big of a problem that is, or whether there are other major barriers to making this mainstream.

That's why I'm awaiting your opinion, smart folks of HN.




Another idea: take into account the movement of the mouse to define a directional cone in the general direction of the movement, which would enable you to preload your pages even before the hover state occurs.
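Something like this, roughly (a hedged sketch; preload() is just a stub for whatever prefetch routine the library uses):

    // Keep the last mouse position, derive a movement vector, and preload
    // any link whose centre falls within a ~30 degree cone around it.
    function preload(url) {
      // stub: fetch and cache the response text
      if (!preload.cache[url]) preload.cache[url] = fetch(url).then(function (r) { return r.text(); });
    }
    preload.cache = {};

    var last = null;
    document.addEventListener('mousemove', function (e) {
      if (last) {
        var dx = e.clientX - last.x, dy = e.clientY - last.y;
        var speed = Math.hypot(dx, dy);
        if (speed > 2) { // ignore tiny jitters
          document.querySelectorAll('a[href]').forEach(function (a) {
            var r = a.getBoundingClientRect();
            var tx = (r.left + r.right) / 2 - e.clientX;
            var ty = (r.top + r.bottom) / 2 - e.clientY;
            var dist = Math.hypot(tx, ty);
            if (!dist) return;
            var cos = (dx * tx + dy * ty) / (speed * dist);
            if (cos > Math.cos(Math.PI / 6)) preload(a.href); // inside the cone
          });
        }
      }
      last = { x: e.clientX, y: e.clientY };
    });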


Amazon has done this to improve the performance of their dropdown menus. http://bjk5.com/post/44698559168/breaking-down-amazons-mega-...


This article was really eye opening when I read it a few months ago.

I've always hated dropdown menus that close when you try to move over to the submenu you've just opened, because you hovered over other items on your way there.

So you'd end up having to perfectly 'trace' the menu so it won't close down on you.


That article says nothing about preloading pages. The speed gains are purely a UI thing -- if you're clever about keeping submenus open even when the mouse leaves the submenu's entry in the main menu, you don't need to provide a delay for menu opening/closing.


I think the Amazon example was being used as an example of the "directional cone", not preloading.


A few years ago, I went to a tech demo/hackathon thing. Somebody had some neat experiments with preloading pages speculatively based on user behaviour.

The coolest approach was: when a user types in the search box, kick off the request for the full results page if they move their mouse on a motion curve towards one of the results, using some machine-learned data on how people move their mouse: not quite in a straight line, and with some acceleration.


Kinda overkill. ;) I may explore this idea later, but I don’t have great hope for it.


It is overkill, you're absolutely right about that. But that's the good thing about the web: you can throw the stupidest ideas on the table and let everyone flesh them out :D


Most of the time there are a few links that are very frequently clicked; on Hacker News, for example, it's usually "next page" and "threads". So if there isn't one already, there should be a way to specify some (mandatory?) pages to preload.


I think you should utilize the prefetch behavior of the link tag for situations such as these. No need for JavaScript at all.


Could you elaborate on this, please?


    <link rel="prefetch" href="http://example.com/page2" />
More info at http://davidwalsh.name/html5-prefetch


In my opinion even that is overkill. Remember, in most cases it’s already instant. The return on investment (speed/requests) abruptly declines with brute preloading.


Weird, I thought about this last night and wondered if browsers pruned the DOM/layers based on input device deltas.


I believe I read somewhere that Amazon does this for their menus. Clever idea to utilize this kind of analysis though.

edit: @devgutt already mentioned this. I had this page open for a while before posting.


Or just load every page. ;)


That might not be very nice for some mobile users that are on a site with a ton of links (like a news site) :)


There is no hover on mobile. Maybe track eye focus in the future :-)


It's a continuation of a joke with the parent comment regarding loading every page. It would be the brute force version of InstantClick :)


Not a bad shout. Not EVERY page, but if you have a five-page site, you might as well pull in all five at once, and save the other four requests later.

The actual HTML is only going to add another few KB.


I'm sorry, but I don't feel that's very efficient, because on sites with dynamic content like HN or eCommerce/financial sites, every page is different from minute to minute.

But caching the skeleton of the page would work. We need the browser to do it for us, just like browsers already cache images.


Or the pages that are most likely to be requested, based on previous session data which you have stored and analyzed.


Another idea: preload all links of every page.


After looking at the source, one thought I have is that since you are dealing with such small timescales you should use the high resolution window.performance.now function (or the Date.now function for higher compatibility) as a timer instead of using the Date object as you do.
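For example, something like this (a rough sketch; the fallback drops sub-millisecond precision):

    // Prefer the high-resolution performance.now(), fall back to Date.now().
    var now = (window.performance && typeof performance.now === 'function')
      ? function () { return performance.now(); }
      : Date.now;

    var start = now();
    // ... preload / display work ...
    var elapsedMs = now() - start;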


Thanks, I’ll look into it.


I wish there was an instantclick link to this website..

OK, here it is: http://instantclick.io


Really cool. The only real problem with this is if clicking has side effects, like http://example.com/?action=logout, as brought up on the page.

And probably a ton of other application bugs, since style and script stuff won't load like it normally would.


Modifying session state should be a POST, not a GET.


It ought to be a PUT since it's idempotent but still modifies state.

It definitely should not be a GET, since good practice says that GET is idempotent (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html)

Nobody ever follows this since we all want stuff like www.example.com/articles/latest to respond to GET and also have different content depending on time. The distinction is at the core of the REST abstraction and it's an art.


Oh, are we talking HATEOAS session support?

    # if you don't have a session token yet
    POST /session-tokens # Accept: application/json
      => 200 {"token": "someuuid"}

    # no cookies set!
    # stuff the session token in localStorage instead

    # now use it
    PUT /sessions/someuuid # with your login details
      => 201

    GET /sessions/someuuid/inbox # Accept: application/json
      => 200 {...}

    GET /sessions/someuuid/inbox # again
      => 304 # wow much cached

    # meanwhile
    GET /sessions/otheruuid/inbox
      => 200 {...} # other user: other memcached prefix!

    # pages that won't change per-user...
    GET /home
      => 304 # can be cached globally
             # (use JS client-side, not edge-side, includes)

    # and finally...
    DELETE /sessions/someuuid
      => 301 /home # log out!
Basically, a session is just another resource that you create and then manipulate using its sub-resources. This makes all sorts of things easier: creating session tokens and attaching sessions to them (by logging in, or as a guest) are separate steps, so you can throttle logins just by throttling access to the token generator. No cookies and no ESI means caching (reverse-)proxies can actually cache. (Note that I'm presuming TLS and hygienic CORS here, so "session stealing" isn't a valid complaint. Or rather, without those things, it's just as valid a complaint for a cookie-based approach.)

Oh, but as a postscript, one little HATEOAS bonus tweak:

    POST /session-tokens
      => 200 {"token": "/sessions/someuuid"} # no constructing URIs!


Don't do this in real life. Putting session IDs in URLs is not a good security practice. (Session fixation. Users might try to share links containing session IDs. Session IDs leak out via URL history and referrers; URLs show up in the darndest places.)

Roy Fielding thinks that server-side sessions violate the REST constraint. https://groups.yahoo.com/neo/groups/rest-discuss/conversatio...

Some people (maybe you) think he's wrong; why can't sessions just be a resource? (And this just goes to show that nobody knows what "REST" really means, and that this question doesn't matter.)

But if you're willing to treat sessions as resources, there are good security reasons to use cookies to refer to them, ideally in addition to a separate non-cookie ID parameter, to prevent CSRF attacks.

Cookies have some performance benefits, too: they can save you round trips to the server. If I want to display fresh personalized session-specific content on my /home page, why should I force the client to download a generic /home page, then download my JS, then run an AJAX request, when she could just send a cookie in the initial request for /home and get back an (uncacheable) personalized response?


> If I want to display fresh personalized session-specific content on my /home page, why should I force the client to download a generic /home page, then download my JS, then run an AJAX request, when she could just send a cookie in the initial request for /home and get back an (uncacheable) personalized response?

Your server shouldn't have to do O(N) units of work to serve a static page to N people. The entire point of GET-idempotency is that you can reduce serving your home page to O(1) units of work for you: you render the page, once, and it gets cached by a CDN, like Cloudflare. This necessitates customization either on the client-side, or not at all. Doing anything other than these is breaking the web[1].

(Besides, you don't necessarily have to do an AJAX request for every previously-would-have-been-customized page. Edge-side-include type stuff (e.g. username+id, viewing preferences) is, literally, what localStorage was created to store. You only have to get that type of stuff once.)

> Session fixation. Users might try to share links containing session IDs. Session IDs leak out via URL history and referrers; URLs show up in the darndest places.

As I said, proper TLS and CORS (where things like adding "noreferrer" to all external links is as much a requirement for proper CORS as not loading external images) generally takes care of this.

But you're right, there is a use for cookies. It's not to hold your session token, though. Instead, it's to hold a client fingerprint token, given to the client the first time they speak to you.

A client fingerprint is anything that authenticates a session token. If you say "give me /sessions/foo/..." and you don't send foo's associated fingerprint, the server 403s you at the load-balancer level.

Session fingerprints are already a common concept: some people use the user's IP address, or their browser UA string, or something, as a fingerprint. These have a pretty horrible UX, though, because these things can change unintentionally (e.g. a mobile connection switching cells.) But users expect that clearing their cookies will log them out of things, so the cookie store is a pretty good place to put these. Then, leaking a URL with the session token does nothing, because it doesn't share session secret (the fingerprint), just the identifier.

How is this different from just putting the session token in the cookie store? Mainly in that the client fingerprint doesn't represent you, it represents your machine. The point of it is to pair your browser to the server. You can log in and out as many times as you like, but it'll all happen under the same "client."

Note that if browsers actually chose to start emitting a unique, persistent, per-site client fingerprint as a header (like the iOS "Vendor/Advertising ID"), this would supplant the client fingerprint token -- but do nothing to replace the session token. They're separate things.

Note also that while the session token is part of the URL (and thus part of determining cacheability), the client fingerprint isn't. This should be obvious, but its implication isn't, necessarily: past your load balancer (which authenticates sessions against client fingerprints) cookies cease to exist; they should not be passed to your backend servers. The client, and your backend, operate on pure resources; the client fingerprint becomes a transparent part of the HTTP protocol, invisible to both sides. Gives a much stronger meaning to "HTTPOnly."

---

[1] And it's not that it's bad to break the web, really; you're not hurting anyone else's site than your own when you do this. It's just that HATEOAS gives you some really great guarantees, and pretty much everyone who throws these guarantees away finds themselves re-building the web on top of itself to try to get these guarantees back.

Matryoshka (Russian-doll) caching middleware, for instance, is what you're forced to deal with when you're trying to build up a complex view within the scope of a single web request. If you instead just use a service-oriented architecture, where each service that needs information from sub-resources makes requests through the public API of the web server (the same one you want clients to use), you'll get caching automatically, because all the sub-resources you're requesting are inherently cacheable, and thus automatically cached.


TLS doesn't address session fixation. (And certainly neither does CORS or the same-origin policy generally.)

Client fingerprinting (with cookies) does address session fixation. But but but didn't you just get through saying how wonderful your solution is because it doesn't use cookies?

If you're happy to use cookies and link them with server-side sessions, then just do that. (Just don't tell Roy.)

And when you need maximum performance, don't force the client to do multiple round trips.

> You only have to get that type of stuff once.

Most visits to public-facing web pages have a cold cache. Maximizing for warm-cache performance at the expense of multiple round trips for cold-cache performance is probably the wrong thing, but only A/B testing and RUM will tell you for sure.


Something like this should be expanded into an RFC!

It would be great not to have to start from scratch on every web project (even if you are using a library, as you should, you often end up coming up with the URLs for each action).

Can you expand this with support for password recovery, oauth-like flows, JWT, etc?


It clearly says that GET's side effects should be the same every time it's run. It doesn't say data retrieved by GET can never, ever change.


It shouldn't have side effects, and it should be idempotent, though. At least if you're adhering to the RFC.


Curious where you get the rationale that idempotent state updates should be PUT requests and not POST.


Because PUT is idempotent and POST is not.

http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1...


In theory yes, but I'd wager that most of the time logouts are just hyperlinks (/account/signout, /logout, etc.). Unless you had a good reason to, you kinda have to go out of your way to make it a POST.


So you're saying that poorly-designed applications wouldn't work properly sometimes.


Poorly designed applications like this one? https://accounts.google.com/Logout


Yes. Imagine if everyone put the following code on their sites:

    <iframe src="https://accounts.google.com/Logout" width="0" height="0" ></iframe>


You could do the same with a POST by just running the following in a hidden iframe on your site:

  <form id="form" method="post" action="https://accounts.google.com/Logout"></form>
  <script>$('#form').submit()</script>
The correct way of dealing with this issue is to rely on CSRF tokens.
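For illustration, a hedged sketch of a token-protected logout (the field name and template placeholder are hypothetical; the server must reject the POST when the token doesn't match the one it issued for the session):

    <!-- Logout as a POST carrying a per-session CSRF token -->
    <form method="post" action="/logout">
      <input type="hidden" name="csrf_token" value="{{ session_csrf_token }}">
      <button type="submit">Log out</button>
    </form>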


I believe that would be the point of <meta http-equiv="X-Frame-Options" content="deny">


That's like trying to duct tape your arm back on after losing a fight with a chainsaw.


<img src="https://accounts.google.com/Logout" style="display:none">

"X-Frame-Options" is used to defend against click-jacking attacks, not to defend against CSRF.


Yes.

What's your point? Google must be incapable of poor design? Everything a big company does is good?


I wanted to know what HN consensus was on this sort of thing, because it seems to me this is not something that inexperienced programmers do (like SQL injection).

Also, full disclosure: I work for Google.


This isn't poorly designed. There are the web specs, and then there's the web itself; if it works, then the spec needs updating. There's no law that POST must be used for session state; that was just someone writing it into an RFC. The fact that GET works just fine, and that a significant share of web apps use GET that way, makes it reality more than any spec does.


Read a bit more about how this is a problem at http://abielinski.com/logout


Sweet, RequestPolicy protected me. ;)


You got me :)

This is the very essence of the problem.


Just because using GET to do destructive things "works" doesn't mean the spec should be changed to allow it. By your logic, we should just get rid of all requests except GET.

Someone sends you a link, you click it, it loads in your web browser telling you that you've sent them $5000. That is why POST, PUT, OPTIONS, and all the other methods exist.

This is most certainly poorly designed. Just because most web apps and web app developers suck doesn't mean that the freaking HTTP Standard should be changed.

Just no... please stop.


From a security point of view this is not good. An attacker can embed the logout link wherever (e.g. send a tweet) and logout your users. As said in the parent post, GETs should be idempotent and, in particular, not change any state.


I totally agree. What I'm outlining is that it is very convenient to implement it as a simple anchor tag, hence that is what you usually see in the wild.


Not the library's fault if you do something stupid.


That's what tokens are for but I agree that POST is the way to go.


I feel using a library like this would constitute a good reason.


More often than not it isn't. Google Mail is one example.

But I do concur.


You can blacklist these links (or whitelist all the other links).


Would it be worth blacklisting logout by default?

Maybe blacklist any links with the words logout, delete, etc. anywhere inside the a tag or in its attributes?


No, my philosophy is to not do anything that could get in one's way. And that would be a false sense of security.

If one is worried about preloading links that trigger an action, there is a whitelist mode, so you can enable links as you review them.
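For example (hedged; the attribute follows the whitelist/blacklist section of the InstantClick docs, so check there for the current syntax):

    <!-- Keep InstantClick away from a link that triggers an action -->
    <a href="/logout" data-no-instant>Log out</a>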


Could this be used on every website if it were a browser extension? If it were, I would expect Logout links to be disabled by default.


Awesome work! I just installed this in my own new experimental (read: very low traffic) web app: http://www.penngems.com/

I set the preload to occur on mousedown rather than mouseover, as per the docs, but even with this I noticed near-instantaneous page loading.


Thanks for the feedback.


Just testing that site (thanks to maxucho for providing the example :) ). Interestingly, even if I hold my mouse button down for a second or two before releasing it, there's still a perceptible loading time of maybe 500ms - 1s.

Not sure if that's expected as part of the design?


Definitely not. Could you tell me how old your computer is, and which browser/OS combination you use, please? Either here or on GitHub: https://github.com/dieulot/instantclick/issues

Also please test how it goes when you press for like 5 seconds.


Hmm that could be just an issue on my site, not necessarily with the plugin. I've been experiencing some strange load times with it even before using InstantClick.


One way I see to move this forward on websites at scale is to run a test to find out the percentage of hovers that result in a click. Suppose it's 90% - that means 10% of those hovers result in fruitless busy-work for your server. Multiply bandwidth + server cost by 10%, and compare that amount to the amount you'd be willing to pay for near-instant load times.

For many companies (Facebook, Twitter, etc) the desire for instant user gratification is paramount, so the push toward instant browsing experience is a very real possibility. One problem is that most people wouldn't really notice, because these websites load pretty quickly as it is.

One interesting direction is if there was some kind of AI in the background that knows what pages you're likely to visit and preloads them - Facebook stalking victims would become an instantclick away.


Yep, I'd like to get statistics on the additional load.

To answer your second paragraph: note that even small gains in speed have a direct effect on user engagement. Google and Amazon have noticed as much: http://www.uiandus.com/blog/2009/2/4/amazon-milliseconds-mea...


By the way, if you don't want to listen to mouseover, merely listening to mousedown takes 50-70 ms off loads [1]. Not negligible.

[1] https://github.com/rails/turbolinks/pull/253#issuecomment-21...


You’re right. And InstantClick does that too. ;)


Really awesome. I was working on something like this myself, but using Jquery ajax combined with history.pushState for partial page loads. This is much better!

There are a couple things that I had on my TODO list that could be handy though:

1) Caching - if you hover back and forth over two links, it will keep loading them every time. Dunno whether this can be alleviated or not.

2) Greater customisability. It'd be great if I could customise whether it was a hover or mousedown preload, on a per link basis. Some links benefit from hover, others it might be overkill.

3) Lastly, it would be cool if it could link up with custom actions other than just links. For example, jquery ajax loading a fragment of html to update a page. This is probably lower down on my priority list though, as the full page prefetch works remarkably fast.

Keep up the great work!


Thanks.

Caching can be done server-side; this is not something I plan to implement.

The last two seem like things that wouldn't be needed very often, so they probably won't make the cut either.


Yeah, I didn't know whether it was worth it on the caching, I'd already turned on a 5 second cache-control just to eliminate this kind of quick back-and-forth mouseover on a list ;)


While interesting, I think this kind of functionality should be implemented only by browser developers and should be turned off by default. Really, I can wait 1 second until the site loads. What I don't want is some library accessing sites without my permission. I usually place mouse over links to see what URL it points to and I sometimes do not wish to click.


It works only with links on the website you're on. When the domain or protocol is different, InstantClick doesn't (and can't) preload it.


I don't get the "can't" part. Surely you can load it as an image or iframe?


Sure, I meant “preload” in the InstantClick sense: getting it ready to replace the DOM.


>What I don't want is some library accessing sites without my permission.

Well, in a way all sites already do that by embedding resources from different URLs.


It's no different than a single page app eagerly fetching data...


I have an even better hack. Since most blog posts / articles are nothing more than a bunch of text, I simply download all articles in a single fetch when the initial page loads. I do this using a CouchDB view that returns all blog posts in chronological order. All successive link clicks don't hit my server (unless there's an image in the article that needs to be loaded). Check it out: http://pokstad.com



It's still under active development ;)


What will you do once you get more than a handful of articles?


My thinking is that it takes a LOT of text to equal one good-sized pic. When using CouchDB as a backend, you want to optimize your queries so that you retrieve as much as possible in the fewest requests. With my current list of articles, I have this much data being requested in a single request to my map-reduce view: http://pokstad.com/db/_design/blog/_view/posts_chrono?includ...

That's about 33KB for 9 blog posts (not including attachments), so about 4KB per post. I could have a hundred articles and it would still be under half a MB. To put things into perspective, the cover photo of my pup is about 818KB. There are also numerous resource files, like JS, HTML, and CSS, that need to be downloaded before the site will even be ready.
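A hedged sketch of that single-request pattern (the view path follows the URL above; the include_docs parameter, the descending order, and the doc's title field are assumptions):

    // Fetch every post in one CouchDB view query, then render client-side.
    fetch('/db/_design/blog/_view/posts_chrono?include_docs=true&descending=true')
      .then(function (res) { return res.json(); })
      .then(function (data) {
        data.rows.forEach(function (row) {
          console.log(row.doc.title); // render the post client-side instead of logging
        });
      });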


I agree. I am building something similar for a client, but I am using node+mongo+backbone. I fetch the 'entire' site sans images, and navigation is instant, except that I place loading animations where an image will show. Obviously not an architecture that suits all needs, but very useful for a wide range of semi-static sites. In a way I feel this architecture is like the new 'Flash': let the users wait for a bit while everything loads, and then let them navigate very fast. (I also use require, so some views may need additional async loads like HTML templates or JS, but they are soooo small that the extra ms don't matter.)


We're going to see this type of architecture more and more. Everyone keeps talking about the C10K problem, but the truth of the matter is that most consumers' computers are idling 99% of the time. The server only needs to be there as central storage and as an authority to authorize certain transactions.


Prefetching really shouldn't be blindly applied to everything, as users may have limited bandwidth. Even though your implementation is better for users than browser prefetch, it does take the choice away from the user unless individual sites make it easy to opt out.


I think that if you stop hovering fast enough and the request is cancelled, you’ll end up only sending headers, thus not affecting bandwidth that much. Not sure though.

If that's not the case, note that it's just HTML we're wasting. In the grand scheme of things it seems to me it would not have that much of an impact on bandwidth usage.


I think it may impact bandwidth usage on mobile significantly.


On mobile you don't have a hover state so surely this would be less of an issue.


Mobile as in a PC over a cell network, you do.


Doesn't Chrome already do something like this?


You may be talking about this: https://developers.google.com/chrome/whitepapers/prerender Or this: https://developer.mozilla.org/en-US/docs/Link_prefetching_FA...

The main differences are that InstantClick preloads just before you need a page, so you can be more liberal in what you preload. Also pages are more recent, useful if you’re serving dynamic content.


This is very cool!

One interesting reaction I had: things loaded so fast that I didn't notice one of the page changes and thought it was stuck. For sites like this one where different pages look very similar, maybe it could be worth experimenting with some sort of brief flashing animation (to make it look like a real page load)?


Yes, I was thinking about something along the lines of NProgress: http://ricostacruz.com/nprogress/ If the page is displayed instantly, show a complete bar that disappears. (I'm not sure that would solve the problem; I may explore different solutions.)


Note that the author, Alexandre Dieulot, opted generously to release this under the MIT license (thanks buddy).

https://github.com/dieulot/instantclick/blob/master/LICENSE


I still don't understand why in 2014 it's not possible to have an entire website, with all its files, zipped and shipped as-is on the first request. How wasteful is it to have 50 requests for a server just for images and resources? Have your root domain be a zip file of everything you need to view it, and then include some additional popular pages along with it. It can't get any faster than that.


Well, why not? You can inline everything (including images with data: URIs) and zip it up. Apache can serve the static zip file to browsers that support it: http://stackoverflow.com/questions/75482/how-can-i-pre-compr...


> how wasteful is it to have 50 requests for a server just for images and resources?

On the other hand, how wasteful is it to download tons of content that would be useless because the user might've navigated away from your site already?


Cached resources can actually make it faster than the packaged model you described.

For example, I go to site A, which uses jQuery from the Google CDN, then I switch to site B, which uses the same CDN. The browser doesn't even make a request for the jQuery resource then; it just loads it from cache. If the full site were loaded as a zip file, we'd be downloading redundant jQuery data twice.


It's coming. That's what HTTP/2 PUSH is for.


I'm not sure it's working for me; I don't see any special network activity in Firebug while using the website.

Also, you should take into account the focus event of links. I tried, and it seems you don't: on the "click test" page, tab to the test link and then hit enter.


Focus isn’t used a lot, so it’s not worth the extra complexity in code.


I don't know how InstantClick is implemented, but to me attaching a callback to a second event is not added complexity; it's just a few duplicated lines of "administrative" code.


Nice! I used pjax for a Chinese-English dictionary project, and it was nice, very very fast.

As you mention with JS scripts not working, I had to do things like rebind functions when pjax finished, or load new JS snippets along with each HTML (page) snippet. Not too huge a compromise.
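A hedged sketch of that rebinding pattern with jquery-pjax (initPage and the data-confirm binding are just stand-ins for whatever per-page setup a site needs):

    // Re-run per-page setup after every pjax navigation.
    function initPage() {
      $('[data-confirm]').off('click.confirm').on('click.confirm', function () {
        return window.confirm($(this).data('confirm'));
      });
    }
    $(initPage);                           // normal full page load
    $(document).on('pjax:end', initPage);  // after pjax swaps in new content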


Isn't this what link rel="prerender" does? https://developers.google.com/chrome/whitepapers/prerender



I would be hesitant to rely on mouse input, or even touch input. Think about things like screen readers and accessibility and you'll quickly learn there are many ways people browse the internet.


It should degrade. No mouse/touch: just pjax, without preloading. No JavaScript or no pushState: standard hypertext behavior.


The tricky thing with all of these (pjax &c.) is that by loading with JavaScript, you lose progressive rendering, so while reducing latency you may actually lose perceived speed.


Correct, this can actually be detrimental for low-bandwidth users, or if the website has enormous pages.


So this is like a fork of Turbolinks? I've made something like this myself, for a website I use, in a couple of minutes. I would probably not have my whole website rely on this plugin.


Is there a demo page? I want to see what it feels like.


The website it is.


Good stuff, but I don't think this website is big enough to demo the benefits; maybe a heavier website is needed. A properly configured web server, as well as the right HTTP headers, will give you the same speed, perceptually.

Also, you might want to add some kind of tracking to avoid making repeat requests (probably within a timeframe), e.g. hovering over one link, then changing your mind, then going to another link. The browser will smartly pull it from its cache, but this lib is still making the requests.


I just added it to my database driven website that is run off shared hosting: http://coloradocsas.info/

It doesn't feel that much faster, but I'd love a way to verify speed changes with data.


I'm watching via the dev console and don't see any requests being sent out on hover. Are you sure it's working?


I took another look and I wasn't doing the init(). I am now, but it still doesn't appear to be working. Not sure why.


You’ll need to enable blacklisting, see http://instantclick.io/start.html#whitelist_blacklist

Sorry, my copy was previously unclear about that point.


I figured as much, but please add an additional demo part to your website, something like 10 interlinking Lorem ipsum pages. Be sure to also add a few images on some pages (but not on others).

I had already viewed a couple pages (to read what it is about) before I started looking for a demo, and by then I wasn't sure if it was my browser caching or your script that made the pages load snappy.

Maybe also provide a 10-page Lorem ipsum "before" example, clearly marked that it does not use your library, to demonstrate the difference. I can't really tell if maybe your server is just really fast.


Wow cool!


> before clicking on a link, you'll hover over it.

Unless you use Vimperator or similar. The demo handles this though, giving a hover time of infinity.


Does it work with SPA, particularly using AngularJS? (Essentially what's needed is the "prefetch on hover")


I haven't tested, but I don't think so. Hooking into the standard browser behavior is easy enough, but an SPA would likely need to reimplement InstantClick's logic specifically for itself.


Yeah, just tried it. Does not work with angular SPA.


I wonder what happens for a website with zillions of visitors per day. Could all this preloading impact the servers?


Predictive prefetching (similar to the work here, but more aggressive) did impact the servers on a zillion-visitors mobile commerce site at first, but it's fine now. Tuning helps.

More than scaling, the larger headaches come from migrating all your existing logging to account for preloading versus impression hits.


Oh, good point. CPC is based on the user clicks, i.e., fetching.


Yes. I wonder too what the additional load is; I don't have numbers for now.


I work for a gazillion-pageviews website. I'm discussing this library with my team; I'll let you know. :P


Thanks, much appreciated!


Am I correct in assuming that touch interfaces can't benefit from this kind of architecture?


You are. Preloading on touchstart is something that I plan to work on soon though.


It's not really a hardware limitation. The Galaxy S4 can detect an almost touch but there's no browser event for it.


Wouldn't this ruin usage stats?


Careful. Usage trumps usage stats. If the user experience is better, the stats will adapt.


It doesn't trump the ability to record stats. When it comes to business, there are certain things you can't do without; for most sites (that are big enough to matter) stats are absolutely required, and no improvement in usage would be worth losing stats for.

(Not talking about this project specifically, as it seems stats can be made to work with it; just replying to the idea in your comment.)


Server-side, yes. Client-side (Google Analytics, etc.), it may need adaptation. See the section “Dealing with scripts” at the bottom of http://instantclick.io/start.html
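A hedged sketch of the kind of adaptation meant there, for Universal Analytics (the event name follows the "Dealing with scripts" docs linked above; analytics.js is assumed):

    // Re-send a pageview after each InstantClick page change.
    InstantClick.on('change', function () {
      if (window.ga) {
        ga('set', 'page', location.pathname + location.search);
        ga('send', 'pageview');
      }
    });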


I haven't read the source for the linked library yet, but could you just have JavaScript in your pages that hits a /gen204 page on your site when the DOM is ready? Then the preloaded page will only hit /gen204 if it's actually rendered. Then just count the 204s instead of 200s.
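A hedged sketch of that idea (the /gen204 endpoint is hypothetical, and the InstantClick change event is assumed from the docs mentioned above):

    // Count actual renders: ping /gen204 once per displayed page.
    function recordView() {
      new Image().src = '/gen204?page=' + encodeURIComponent(location.pathname);
    }
    document.addEventListener('DOMContentLoaded', recordView);      // full loads
    if (window.InstantClick) InstantClick.on('change', recordView); // preloaded swaps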


I have a problem understanding "instant website".

Can you provide a specific definition? Thank you.


Have you considered preloading all of the links while the person is reading the page?


For sure. But you get way less bang for your buck that way.


Any demo? I mean, an implementation on a real website, like a blog or something like that?


Unfortunately none that I know of.


Very nice. It would be great if the JQuery Mobile folks would integrate this.


Good idea. Preloading a page when you hover over a link with your finger.


Does this require server components? Or does it also work with a static site?


It just requires the 1.6kb JS file. :) So yes, it works on static websites (instantclick.io is hosted on GitHub Pages).


In theory you could write a chrome extension and use it for any site, right?


Sort of, see my comment below: https://news.ycombinator.com/item?id=7201725


Cool, and it only preloads GET requests, right? So I don't have to worry about hitting "Delete" or "Save" on a web app by hovering.


Right. If the “delete” or “save” button isn’t a link it won’t get preloaded.


Is it possible to make it a Chrome extension and use it on all sites?


Isn't this more or less prefetch, as implemented by a number of browsers eg chrome? https://developers.google.com/chrome/whitepapers/prerender


I think it is possible for a subset of websites (you can't preload all links by default, because some trigger an action such as a logout); that's an idea I had in mind.


I pasted the js code into a userscript.

Works great! I hope I won't have too many accidents with it.

I remember using a similar Windows program 10 years ago that preloaded all links.


Awesome! I can't believe I hadn't thought of this before.


Does it support /#!/ (hashbangs) or just pushState?


Just pushState.


I couldn't manage to see anything special.


This is really cool! I'll try it out.

Thanks for sharing :)


This is fantastic - will definitely use this!


Love it! Will it only work on html5 sites?


It works with every website.


Flash?


Flash in 2014?


Extra kudos for not using jQuery!


>Click − Hover = ∞

>Click − Mousedown = 2 ms

>Click − Touchstart = ∞

I win!


umm like 40% of traffic is already touch, seems too late


Preloading on touchstart is planned.

Also note that percentages like that don’t tell the whole picture; non-touch traffic hasn’t decreased by “like 40%”.


Is there a demo?


The website it is.



