For a while I worked on and made use of a very lightweight JS framework that would essentially do two things:
* Intercept link clicks and form submissions and perform the exact same request asynchronously (AJAX)
* Wait for the server to respond with the parts of the page that need to be swapped out (a JSON map of section IDs to plain HTML content) and then, as expected, swap out those parts of the page (a sketch of this loop follows below).
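A minimal sketch of that intercept-and-swap loop might look like the following. The `X-Requested-With` header, the JSON shape, and the handler structure are assumptions for illustration, not the actual library:

```js
// Hypothetical sketch of the intercept-and-swap idea (not the author's actual library).
document.addEventListener('click', async (event) => {
  const link = event.target.closest('a');
  if (!link || link.origin !== location.origin) return; // only intercept same-origin links
  event.preventDefault();

  // Perform the exact same request, just asynchronously.
  const response = await fetch(link.href, {
    headers: { 'X-Requested-With': 'XMLHttpRequest' },
  });
  const sections = await response.json(); // e.g. { "content": "<p>…</p>", "sidebar": "…" }

  // Swap out each returned section by its ID.
  for (const [id, html] of Object.entries(sections)) {
    const target = document.getElementById(id);
    if (target) target.innerHTML = html;
  }
});
```

Form submissions would get an analogous `submit` listener that serializes the form with `FormData` and posts it the same way.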
This works REALLY well for, like, 80% of my use cases. With a little extra flavor functionality (such as automatic loading indicators and event listeners) I can enable like 90% of interactivity. The remaining 10% consists of things that affect only temporary state and don't require a server round-trip (e.g. a "select all" checkbox), and those I would handle with more traditional jQuery DOM manipulation.
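For instance, that "select all" checkbox needs nothing more than a couple of lines in the traditional jQuery style (selectors here are made up):

```js
// Purely client-side, temporary state: no server round-trip involved.
$('#select-all').on('change', function () {
  $('.row-checkbox').prop('checked', this.checked);
});
```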
For the most part, once I had this basic system in place, I could rely on the server to render (and re-render) all of the HTML and rarely had to write any custom Javascript. I was essentially relying on the behavior of vanilla links and forms. The site would even continue to function if you disabled Javascript, because the server would see that the request wasn't AJAX and just render the whole page instead. The downside was that preserving local state (e.g. partially completed form fields) was tricky whenever I had to make a change to one of their parent containers.
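The server-side half of that graceful degradation can be a single branch on the request header. Here is a sketch in an Express-style handler; `loadAlbum`, `renderPartial`, and the route are hypothetical stand-ins, not the actual application:

```js
// Sketch of the server-side branch (Express-style; helper names are hypothetical).
app.get('/albums/:id', (req, res) => {
  const album = loadAlbum(req.params.id); // hypothetical data access

  if (req.get('X-Requested-With') === 'XMLHttpRequest') {
    // AJAX request: return only the sections that need to be swapped out.
    res.json({
      content: renderPartial('album/content', { album }),
      sidebar: renderPartial('album/sidebar', { album }),
    });
  } else {
    // No JS, or a full reload: render the whole page as usual.
    res.render('album/show', { album });
  }
});
```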
Everything I learned about this approach eventually led me to appreciate what React.js is doing. I've been using it, and while I may not like all of its API or how heavy it feels overall, the cycle of rendering and re-rendering different parts of the page feels very natural and is far easier to debug than most JS-heavy front-ends.
Why not just let the hyperlinks do what they are supposed to do, which is what you end up doing anyway?
When I approach a link, I hover, see the href in the status bar, think and decide, and click. Thereafter, I expect it to take me somewhere else, while the browser pushes the URL of the page I left onto a stack, the history. This is already implemented and working in my browser, so why redo it with JS? To lower the load time? Then don't make the page multiple megabytes in the first place, no?
The ugly thing is that government websites and the like are adopting a similar style of web app design, relying upon wizardry, tricks, and hacks. For example, I sweat blood when I use my university's web pages, because I'll hit a bug in the JS for some popup or button that's already there, and it'll hurt my educational career.
One thing I'm not sure I made entirely clear. Whenever my JS library intercepts a link or button click, it will update the URL bar with whatever the link or form would've brought you to, so that at any time if you refresh the page it will result in the same state that the AJAX gave you. Also, the approach is designed to gracefully degrade if JS is disabled, because the links and forms really are just links and forms.
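For reference, keeping the URL bar in sync is roughly one `history.pushState` call after each successful swap, plus a `popstate` handler so the back button replays the fetch. A sketch, assuming a `loadAndSwap` helper that wraps the fetch-and-swap step:

```js
// After a successful AJAX swap, record the URL the link or form would have navigated to.
history.pushState({ viaAjax: true }, '', link.href);

// Make the back/forward buttons re-fetch and re-swap instead of doing nothing.
window.addEventListener('popstate', () => {
  loadAndSwap(location.href); // hypothetical helper wrapping the fetch + swap logic
});
```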
> why redo it with JS? To lower the load time? Then don't make the page multiple megabytes in the first place, no?
I hear you, and often I let links do their full behavior, no JS magic involved.
But past a certain amount of interactivity, the AJAX-intercept approach buys you a few things beyond load time:
* Even if your payload is very small, the HTTP round trip can make the interaction feel slow, especially if your browser has to go through the full load+render routine. Things like images and sometimes even the whole page will flicker. When all you need to do is update a very tiny portion of the page, having a thin JS layer on top of your usual link/form element can be a significant UX improvement.
* Say you have a page with a few interactive buttons/links and a form with text input. Without any AJAX, if you click a button or link (that isn't the form's submit button), it will wipe anything you had entered into the text field. Not a great experience.
> Whenever my JS library intercepts a link or button click, it will update the URL bar
The browser does this already.
> Even if your payload is very small, the HTTP round trip can make the interaction feel slow, especially if your browser has to go through the full load+render routine. Things like images and sometimes even the whole page will flicker.
Browsers are very smart these days and cache many things. If the bloat is eliminated (unnecessarily large images [make thumbnails], custom widgets [use what's already there], etc.) and your webpage is not the whole of the Emacs or FreeBSD manual, it'll load in about a second or so at worst. The progress bar will indicate that it's loading, so the user knows the server is live and isn't just standing there waiting for an HTTP 50x or 40x or a lookup error.
> Say you have a page with a few interactive buttons/links and a form with text input. Without any AJAX, if you click a button or link (that isn't the form's submit button), it will wipe anything you had entered into the text field. Not a great experience.
Also a nonexistent experience. Most browsers do retain the contents after navigation, both ways. I use xombrero and it does; Chrome too, I just tried it.
If your web pages take more than a second to download, check your web application, web server, proxies, asset file sizes, loading order, internet connection, hosting status, etc. Use default form widgets and don't fiddle with their default behaviour. Otherwise, when everybody puts out their shiny new idea, users become too timid to click anything. But I guess if all websites were like this, thousands of front-end devs and web designers would lose their jobs, and thus we have all these websites.
I guess my overall point here was that you can build a sufficiently interactive webapp without a bloated, client-side, MVC, virtualDOM, JS monstrosity. And that there are widely varying degrees of complexity to these implementations.
So, on the one hand, you have Ember, Angular, Backbone, React, etc. On the other hand you can have what I described. A very thin, very maintainable layer on top of what the browser is already doing.
That was the point.
Anyway, one other thing:
> Most browsers do retain the contents after navigation, both ways.
That's not what I meant. What I meant is that sometimes you need a button or link to take you back to the exact same form, just with the page updated slightly. (Add an optional field, etc. Think of entering an "album" and needing to add a list of "tracks".) Without any JS/AJAX, that "add track" button will require a bit of finagling to not clear out the rest of the form that you've already partially filled out (but aren't ready to submit). I don't know if my example is entirely clear, but it's about enabling a certain amount of interactivity without relying on more standard JS bloat.
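Concretely, the "add track" button only needs to grow the track-list section while leaving everything else in the form untouched. A sketch, with made-up IDs and endpoint:

```js
// "Add track": append one more row without wiping the partially filled album form.
document.getElementById('add-track').addEventListener('click', async (event) => {
  event.preventDefault(); // don't submit (and reload) the whole form

  const trackCount = document.querySelectorAll('.track-row').length;
  const response = await fetch(`/albums/new/track-row?index=${trackCount + 1}`, {
    headers: { 'X-Requested-With': 'XMLHttpRequest' },
  });
  const rowHtml = await response.text();

  // Only the track list grows; the already-filled fields elsewhere are never touched.
  document.getElementById('track-list').insertAdjacentHTML('beforeend', rowHtml);
});
```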
> I guess my overall point here was that you can build a sufficiently interactive webapp without a bloated, client-side, MVC, virtualDOM, JS monstrosity.
Given the fact that most of the bloated JS monstrosities you're talking about are also the most popular websites around (Facebook, Google, Pinterest), I think this assumption might be wrong.
Do you have any examples of popular, complex web apps that work the way you suggest is better? Why is it that Facebook etc don't make things that work the way you're talking about?
I think he means that sites which are not as complex as Facebook etc. think they need to use the same frameworks Facebook etc. uses, when they would be perfectly fine using the pjax/Turbolinks-style approach the OP suggests.
Yes, this is what I meant. One prominent example is GitHub. As far as I know, they still take pjax/turbolinks to an extreme rather than adopting a true client-side template/view framework.
A one-second load is slow, and should absolutely not be the standard we're aiming for. If AJAX allows you to get closer to the ideal without messing with the typical browser workflow, why not use it?
Try browsing https://dev.to/. It feels good, no? That's the standard we should be aiming for, and it's made possible in part with exactly these ideas (see: http://instantclick.io/).
The truth is, it's all because most websites are really ads, and they have to follow the fashion; otherwise you risk sending a message that your company or project is bad in some way. The web is a fashion-driven industry, and usability and ergonomics are not even a secondary consideration.
I made a website for my uncle's tile business. A real, international business with an office in the UK. I made it with ~180 lines of CSS, ~30 lines of JS, ~20 lines of GNU m4 (1), and 60 lines of make. Got positive impressions from everybody he works with and probably improved our sales. External JS included, the WHOLE website source weighs in at 150 KB without images, with multiple pages, a gallery with a lightbox on a proper grid (github.com/ThisIsDallas/simplegrid), a custom Google Maps map via the API, a carousel, and a consistent colour scheme, all done in an afternoon. And once compiled, the product weighs less than the source.
I guess the problem is that nowadays designer folk get too much say in the development of websites and web standards. There is no prerequisite to become one. If you haven't studied CS in depth, at university or by yourself, you can't do embedded, OS dev, etc., but by mashing together some stuff from GitHub you become a web designer, no qualifications needed.
Here we have a website that talks about bloat and lists nine or ten tools that have well-established, better, more generic counterparts. Generating one file from another via a filter program (a minifier, m4, awk, a CoffeeScript compiler...) is exactly what make excels at. Replacing strings is what m4, awk, and sed have done since the 1980s. We don't expect designer folk to know anything about the basics of computing. So they spend their time duplicating, triplicating, and quadruplicating effort spent decades ago, ending up doing the same thing, only worse; and when it comes to actual work, all they do is mix and match libraries and add bloat after bloat, instead of looking at what needs to be done and doing it with the tools that are already available and fit.
(1) Otherwise I'd have to implement a foreach macro, which would take some 10 lines more.
> I'm not sure I understand why designers should know CS. Most of them spend their time in Photoshop/Sketch, not programming. Unless I misunderstand.
They also write a lot of JS and CSS that faces the web. I didn't mean to say that they should learn/know CS, just some basics of computing, if they're going to write code that runs on other people's machines (JS).
> Also from your other comment you would like Ajax to not be used in websites?
I would like the normal behaviour of browser widgets (links, buttons, forms) left alone. New widgets are OK with me, e.g. an interactive map, a carousel, a dropdown menu. You get to define the behaviour of those, but a button, a link, etc. already has a meaning associated with it.
Really? I've never met a designer who wrote JS. Engineers should be writing the software. JS web apps are essentially the same as C++ Win32 (or whatever thick-client tech) apps, with some web semantics like URLs preserved. I've seen designers write CSS, and I think the good ones do, but I've never met one who was happy to or particularly skilled at it.
There is a recent inclination among designers to write JS and take part in writing front-end code. YouTube is full of their talks.
JS/hypertext was not initially meant for engineering anything, so it's not analogous to a combination of a UI toolkit, a proper programming language, and a compiler.
Because otherwise you get a lot of designers who know print design but don't actually know how a web page works on a technical level. That gives you awkward (or flat out impossible to implement) designs that were made without any regards to how it actually works in a browser across various screen sizes and devices.
Sounds like you are describing pjax. Yeah, that's the big idea with React: single-page apps with the mental model of "render everything all at once," like we did with old-school late-90s/early-2000s web apps.
Oh, I didn't mean pjax was used in the late '90s, if that's what you thought I meant. I meant that we did all rendering server-side, because there was no XHR :-)
Good job Smudge. I found a similar technique quite helpful. The only difference is that, instead of asking the server for a partial, I just download the entire HTML source, query it for the parts I need, and inject those parts into the DOM. Super helpful for zippy photo galleries or infinite scroll.
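That "fetch the whole page and pick out what you need" variant is easy to sketch with `fetch` and `DOMParser` (the selectors are placeholders):

```js
// Download the full HTML of the next page, then graft only the needed parts into the DOM.
async function appendNextPage(url) {
  const html = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(html, 'text/html');

  // Query the fetched document for the new gallery items and inject them.
  const newItems = doc.querySelectorAll('#gallery .photo');
  document.getElementById('gallery').append(...newItems);
}
```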