Back in 2006/7 I remember we'd use "progressive enhancement" to make a site work without JavaScript and then add JavaScript enhancements for those who had JS enabled.
At some point (maybe after the popularity of Google Maps?) nobody wanted progressive enhancement and it was totally cool to just ignore users who had JS turned off. It made web app development so much easier, but probably less user friendly.
It feels like JS is a hammer to fix the nail of the page reload. I always thought it was too bad that browsers chose to show the blank page instead of sending the request and just rendering the difference themselves.
I aim to develop web UIs using (mostly) progressive enhancement. I adopted several practices and developed some libraries supporting this process. As a result:
1. When I need to create something, I know exactly which data structures to use. This is determined by what is available in modern browsers.
2. I can quickly prototype solutions using plain HTML focusing on logic rather than style.
3. Since my data is by definition contained in HTML, I can easily query it using CSS queries. This makes working with nested data a breeze.
4. I factored code I reuse into generic, self-contained behaviors. Things like "when this form is invalid, this control should be inactive" (sketched right after this list). There is usually very little to no page-specific code.
5. Once a behavior is written and tested, "debugging" usually involves simply making sure the page has the right attributes. I can do this by running a CSS query in console or looking at DOM. No breakpoints, no stepping, no watches.
6. The most important part: I can add one behavior at a time and the result is something that works and makes sense.
7. A lot of UI "logic" I used to have in scripts naturally migrated to CSS.
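To make number 4 concrete, here's a minimal sketch of one such behavior. The data-disable-when-invalid attribute name is something I've made up for illustration; the rest is standard DOM API:

    // Generic behavior: any element marked with data-disable-when-invalid points
    // at a form; while that form fails HTML5 validation, the element stays disabled.
    // (The attribute name is made up for this example.)
    function wireDisableWhenInvalid(root = document) {
      root.querySelectorAll('[data-disable-when-invalid]').forEach(control => {
        const form = document.querySelector(control.dataset.disableWhenInvalid);
        if (!form) return;
        const update = () => { control.disabled = !form.checkValidity(); };
        form.addEventListener('input', update);
        update(); // set the initial state on page load
      });
    }
    document.addEventListener('DOMContentLoaded', () => wireDisableWhenInvalid());

The page-specific part is then just markup, e.g. a submit button carrying data-disable-when-invalid="#signup".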
I like this process way more than fiddling with tons of page-specific "glue" JavaScript. I especially like it when I'm in a crunch, because it pre-defines a lot of the things I would have to "design" on the fly in a traditional development workflow. Also, if I run out of time I have a working (if ugly) app. If I introduce a bug somewhere in the UI or run into a compatibility issue, it usually doesn't result in the entire user workflow stopping dead.
You, sir (or ma'am), are a credit to modern web developers :) I love that you're using PE as a development tool and not just doing it because it's a good way to build robust websites and applications. I may have to try and sell your reasoned approach to some of my colleagues (they're not PE averse, but don't always see it as a great benefit).
This can't be upvoted enough. Making sure your site works from the ground up will improve the product, the process and the developer. Less code that's easier to read, less development time, easier maintenance, and a better understanding of the subject matter. And probably better usability as well, since one took the time to think it through.
Sometimes you have to cut corners, sure, but if you know what you are doing, the PE way is probably the easier one from the start.
It does make development easier, and at some point it just doesn't make sense to support any longer.
If the number of people who visit your site with JavaScript disabled is less than the number of people who visit using the Opera browser, does it really make sense to add [number of supported browsers] x [JS on | JS off] permutations to your testing workload? Is spending the time and resources on creating a non-JS site worth it, or would it be better spent somewhere else (usability, accessibility, etc.)? Everything is a trade-off.
I personally feel the JS ship has long sailed. I'm more worried about the recent trend of sites not even working in Firefox anymore, just Chrome. That is definitely a bit sad.
> If the number of people who visit your site with JavaScript disabled is less than the number of people who visit using the Opera browser, does it really make sense to add [number of supported browsers] x [JS on | JS off] permutations to your testing workload?
If your site doesn't work for people with JS disabled, you're unlikely to see significant traffic from people with JS disabled.
Lucky for you I'm not selling anything. Also, like I said, it's in alpha. Almost nothing is working. I just wanted to get it in production so I could start tracking my sites.
No worries etatoby, just wanted to share an alternative.
EDIT: Thanks for your comment though, I just thought of some ways I can improve usability by using <noscript> to explain why JS is used for something.
> Is spending the time and resources on creating a non-JS site worth it
Comments like these make me wonder if any of the people saying PE is hard ever tried it at all. The whole point of progressive enhancement is that you don't create a separate non-JS website.
Of course they haven't tried it. Some thought leader told them PE was dead (or smelled funny), so they were able to write it off and go ahead with what they wanted to do anyway.
I do not understand your comment. Progressive Enhancement (and I've done it) has a lot going for it, but "easier" or "as easy" isn't the case. Thus time and resources are spent to get the same functionality.
Example: I make a form. It works without JS, so my server has to generate the form, handle the request where the form is submitted, and generate the result HTML. If I'm practicing good separation of concerns, the work is done in a service/contained library that the webserver code calls and assembles into a page. Then I go to add a "nice" Js experience: disable the normal form submit, read the form data, send the request, parse what I need out of the HTML result ("parse" = easy cut here) and replace the portion of the DOM involved. A nice, maintainable PE site. Nothing wrong with that.
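For what it's worth, the enhancement layer in that setup can stay pretty small. A rough sketch, with the selectors ('form#contact', '#result') made up for illustration:

    // Enhance a server-rendered form: submit via fetch, cut the interesting
    // fragment out of the returned HTML, and swap it into the current page.
    document.querySelector('form#contact')?.addEventListener('submit', async (e) => {
      e.preventDefault();
      const form = e.target;
      const response = await fetch(form.action, {
        method: form.method,
        body: new FormData(form),
      });
      const html = await response.text();
      const doc = new DOMParser().parseFromString(html, 'text/html');
      const fresh = doc.querySelector('#result');   // the "parse" step: cut out the piece we need
      if (fresh) document.querySelector('#result').replaceWith(fresh);
      else form.submit();                            // fall back to the normal page flow
    });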
But if I have the server generate the original page only, I can have JS read the form, call the service, and update the DOM. I was able to completely skip the server composing a new page. And that's for a bone-simple form, if my site has any interaction on that form before submit, each step represents duplicated effort. Checking if a name is already used? JS to call a service and add a line. Server/service absolutely needs to check that, but if we're not worrying about PE we can completely skip worrying about a nice UI generated on the server-side (which would involve re-populating the form we got, so it's not just "same HTML + one line"). And that's still the stupid-simple stuff.
To do this I need to design the application w/o JS and then find nice ways to improve it. But my designers, my PMs, they are all thinking from a JS-first point of view, so without PE when they say "do X" I can do X. With PE when they say "do X" I need to figure out how to do Y (a flow they didn't consider at all), and make sure it can behave like X. Anywhere it's hard to do so, it's on me to fix that because neither design nor business consider PE remotely important or valuable.
PE is great, and I'd love to work on more sites that use it...but I'll confess I rarely put forth the effort myself, even when I do have the space to do so in a project. Because Progressive Enhancement IS effort (most importantly time), effort that will most likely be seen by a single digit number of users and I've got a lot of stuff I want to do and a lot I need to do. Even a free hour can improve my productivity or quality for far more users if I spend it in something other than PE.
PE has very real benefits beyond just working without JS...but even those benefits rarely outweigh the time/effort costs in an industry where there's always far more work than time.
I’ve always wondered why anyone would use JS for form submission. One JavaScript error in one remote library and customers are now unable to contact you, and the only signal you get for the failure is your empty inbox.
> It does make development easier, and at some point it just doesn't make sense to support any longer.
It depends on what you're doing. If you're building an app (something which would make more sense as a native program anyway), then sure — use JavaScript. But if you're displaying text and images, then HTML & CSS are perfectly capable of handling that.
In general, I think that a good content-focused website will work equally well across all browsers. There's no need to force end-users to allow you to execute code on their machines in order to display paragraphs or images.
> I'm more worried about the recent trend of sites not even working in Firefox anymore, just Chrome.
And I'm worried about the long-standing trend of sites not even working in lynx, links, elinks, eww or w3m anymore, just Firefox, Chrome & IE.
> Is spending the time and resources on creating a non-JS site worth it, or would it be better spent somewhere else (usability, accessibility, etc.)?
Even a naive approach to making a javascript-rich site render with JS turned off will have profound positive impacts on usability and accessibility, so I'd say it's still worth it.
I remember when we used the term "Graceful Degradation".
It really isn't that hard to design a functional HTML site and then layer Javascript and AJAX over top of it to enhance the experience. There are very few sites that actually need to be using Frameworks like Angular and React.
Sometimes I wonder if most developers would even be able to design a functional site using only HTML. It seems like most don't know the difference between a button and an anchor.
With modern server-side frameworks it's even easier to support graceful degradation or progressive enhancement, as the frameworks will handle the Accept header processing for you.
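For example, with Express it can be a single res.format call per route. A sketch, where loadItems and the 'items' template are stand-ins rather than anything from the article:

    const express = require('express');
    const app = express();

    app.get('/items', async (req, res) => {
      const items = await loadItems();   // hypothetical data access
      res.format({
        // Plain navigation / no-JS clients get a full HTML page.
        'text/html': () => res.render('items', { items }),
        // The JS enhancement layer asks for JSON and gets just the data.
        'application/json': () => res.json(items),
        default: () => res.status(406).send('Not Acceptable'),
      });
    });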
However, it's also important to mention that with React and careful planning, a no-JS version is basically included for free.
One might debate whether a page really needs hundreds of KB of JS to function, but then again, this is not much different from the classic progressive-enhancement site with jQuery etc. layered on, and React's server rendering can easily make the page available and working when JS is disabled.
> However, it's also important to mention that with React and careful planning, a no-JS version is basically included for free.
Careful planning is required of all frameworks to make a JS-free version. I wouldn't say it's a feature of React or that it's free.
> One might debate whether a page really needs hundreds of KB of JS to function, but then again, this is not much different from the classic progressive-enhancement site with jQuery
jQuery was a crutch that wasn't strictly necessary, and with modern browsers you can largely do without it and write pure JavaScript. Assuming you know how, which I would wager most developers don't.
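To be fair to that point, most everyday jQuery calls now map to one-liners in current browsers. A few illustrative equivalents:

    // $('.card')                -> document.querySelectorAll('.card')
    // $(el).addClass('open')    -> el.classList.add('open')
    // $(el).on('click', fn)     -> el.addEventListener('click', fn)
    // $.getJSON('/api/items')   -> fetch('/api/items').then(r => r.json())
    document.querySelectorAll('.card').forEach(card => {
      card.addEventListener('click', () => card.classList.toggle('open'));
    });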
> Back in 2006/7 I remember we'd use "progressive enhancement" to make a site work without JavaScript and then add JavaScript enhancements for those who had JS enabled.
I still do that in my case; a good 95% of the features I develop work without JavaScript. One additional benefit is that if your JS crashes for any reason, most things still work.
I continue to do it as well. Web frameworks make it so easy it’s hard to justify not doing it.
There are so few web apps that actually need the SPA treatment. It’s a lot simpler to ignore the tool chain headaches that come from committing to the SPA when you don’t need it and gracefully degrade.
It’s so rare that I actually run into an SPA that doesn’t feel slow...I just wonder why people bother sometimes.
Oh, there's a function to it. The 7 year tech cycle generates huge amounts of money for everybody involved. Consultants and consulting companies get a nice influx of billable hours every time the tech changes. Software companies pay to upgrade to the coolest tech because it's relatively cheap compared to the amount of money it generates. Users like it because with each tech upgrade comes an increase in convenience and graphical design. And, in some ways, having the latest version of the hot apps is another way of keeping up with the Joneses. Everybody's happy except for the power users. They tend to get left out in the cold.
I've seen it plenty of times. Something triggers a JS execution bug that breaks the entire script. Suddenly, the site is dead. If you're lucky, it might start working after reload. If not, you'll have to wait until developers figure out something is wrong.
Causes vary. Sometimes it's a resource that fails to load. Sometimes it's a race condition. Sometimes, it's just dumb coding[0]. The more JS is put on a site, and the more the site depends on it, the more likely it is to happen.
--
[0] - Like one of the large food order sites in my country that won't let you submit a form if it contains a number in the comment box - the very comment box you're supposed to use to add details like floor number. The problem persisted over the last couple of times I used them (in a space of several weeks), so I wonder if they even realize they have a problem and are losing business because people see the checkout form broken without explanation, and order elsewhere.
If you have a big website with lots and lots of different kinds of visitors you either have:
- Visitors with bandwidth so low it takes a long time for the JS to download completely (so making it work in the meantime helps a lot).
- Visitors with outdated browsers where your JS will stop on some unsupported feature you forgot to polyfill (I usually add a polyfill when I see it, but in the meantime it still works). That's what I meant by "crashing": it never works 100% of the time in 100% of cases; it depends on the browser as well.
I built a website for a mid-sized health care company, and the primary goal was to communicate with as many people using as many devices as possible, not to be flashy.
With that mandate, the site (about 300 pages) ended up almost completely js-free. I think there was only one page that had js, and that was for a calculating widget.
js is great for certain things, but certainly not necessary for all the things it's used for.
Not really. I'm still "the new guy" and I'm not sure how they'd feel about me outing it. Though the consultants keep talking about submitting the site for some kind of award. So maybe you'll see it eventually.
> At some point (maybe after the popularity of Google Maps?)
I feel like there was a huge uptick in developers ignoring users with JS turned off when Angular and React started becoming popular. It's interesting to note that big tech companies, who have a financial interest in pushing websites away from progressive enhancement techniques, were the ones behind the creation of those frameworks.
> I always thought it was too bad that browsers chose to show the blank page instead of sending the request and just rendering the difference themselves.
FWIW there are libraries that work around this, e.g. turbolinks - which I heavily suspect Github is using, and is probably a large part of why they're able to support progressive enhancement.
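The core idea behind those libraries is roughly this (a stripped-down sketch, not Turbolinks' actual code, and it skips back/forward handling):

    // Intercept same-origin link clicks, fetch the new page in the background,
    // and swap in its <body> instead of doing a full reload.
    document.addEventListener('click', async (e) => {
      const link = e.target.closest('a[href]');
      if (!link || link.origin !== location.origin) return;
      e.preventDefault();
      const html = await (await fetch(link.href)).text();
      const doc = new DOMParser().parseFromString(html, 'text/html');
      document.body.replaceWith(doc.body);
      document.title = doc.title;
      history.pushState({}, '', link.href);   // keep the URL bar and back button sane
    });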
This, so much. There's a reasonable set of common JS usage patterns out there that really ought to be replaced with declarative statements in HTML and supported by browsers, so that doing things like dynamic forms, drag-and-drop areas, page transitions without reload, notification badges, etc. would not have to require running arbitrary code.
Intercooler.js provides a preview of what this could be like, but, of course, it's layered on top of javascript since that's the hammer we have to work with.
MS actually did something like this in IE5/6 in VB, but it never caught on. You could write a sort of reusable code snippet. I can't remember for the life of me what they were called though.
Probably had the word "Active" in it. Active control or something like that.
I think people expect much more interactivity now (I think Google Maps was a driver, and web-based email as well), so it became harder to do pure back end. For example, a lot of the queries I run now show a dynamically generated graph. I can do that on the back end, but it's much easier to use a JavaScript charting package. I guess I could just use JavaScript all the way down.
> I think people expect much more interactivity now,
This keeps coming up.
Personally I keep thinking that devs and cv-driven development might be to blame as I rarely hear any customer demand anything that demands a frontend framework for their web sites (now web apps, that's another story).
Seconded. Actual users have little demand. Especially non-tech users. They just take what devs give them. Frontend stuff seems mostly fashion-driven these days, with designers copying other designers. I can't imagine any user actually asking for hero images, hamburger menus, floating headers/footers, webfonts, or making JS required to render article text.
It is fashion-driven, but that's not to say that it ignores what users want. Nobody was asking for bell bottoms or leisure suits either, but people bought millions of them. Customers will say "our site needs a refresh" or "our site looks dated", when what they mean is that, regardless of whether or not it's functional, they seem old and stodgy compared to their competitors. This happens in brick-and-mortar too: restaurant or shop owners will remodel even if there's nothing really wrong with their existing shop functionally.
The real problem is that fashions are even possible on the web. This is what Ted Nelson calls the "triumph of typesetters over authors," and it's one of the great tragedies of the computer age. We could have had a real, global, hypertext system with working two way links and micropayments, but instead we got HTTP and HTML.
If your page consists solely of a dynamically generated graph, it's OK for it to not show anything with JS disabled.
If you have other content though, it's not OK to not show anything; only the graph should be missing without javascript, the rest of the page should still render.
>If your page consists solely of a dynamically generated graph
You can put data for the graph into a table, then use a script to render that table as a graph. If people cared enough, graph libraries could expect their data in table format.
Sadly, most graph libraries seem to love JSON data formats, which aren't logically table oriented... though if you used the JSON data and JavaScript to load the table and create the graph...
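A sketch of that table-first approach; the selectors and the drawChart call are hypothetical stand-ins for whatever charting library you'd actually use:

    // The data lives in a plain <table>, readable without JS; the script just
    // lifts it out and hands it to a charting function.
    const rows = document.querySelectorAll('#sales-table tbody tr');
    const data = Array.from(rows, row => {
      const [label, value] = row.querySelectorAll('td');
      return { label: label.textContent.trim(), value: Number(value.textContent) };
    });
    drawChart(document.querySelector('#chart'), data);   // hypothetical charting call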
There was a viral post on HN back in the day that demoed what the web would look like with seamless page loads instead of JavaScript. The consensus then was that the blank page and spinning GIFs were good UI for showing that your click had led to a major change, while seamless JS communicated a minor action to the users.
> It feels like JS is a hammer to fix the nail of the page reload.
JS is useful for much more, e.g. collaborative editing, where the user doesn't have to wait for a server round-trip. JS is essential for a real-time multi-user interactive web.
I'd start even simpler. ISO 8601 for dates. Metric for measurements. One way of writing numbers (I like the programming convention, with dot being the decimal place separator). It would already simplify many things, leading to efficiency gains across many segments of economy.
Western civilization needs to prevail somehow too, yes. The tech world is still English-first IMO and the tech drives everything else nowadays because it provides enormous advantages.
This is some of the most self-centered reasoning I've seen on HN. Everyone should learn English to reduce your workload.
It's almost a caricature of the stereotypical redneck who has never left the country and gets mad when they overhear someone speaking Spanish at Fiesta, muttering "but this is America" under their breath.
Maybe some see it as self-centered, but I find it resonates with me strongly. Now that we're a global civilization, life would be much simpler if global-scale things got standardized. Like, if the metric system were adopted everywhere. If people used ISO 8601[0] instead of whatever is their random way of writing time down. If people used one well-defined number format. Etc.
IMO what's really being self-centered is - when dealing with international things like the Internet - to insist on making everyone else's life harder for the sake of whatever random local tradition you grew up with.
And I say that as European with plenty of weird localized ways of writing dates, numbers, currencies, etc. Ways which I abandoned pretty much as soon as I got exposed to computing.
Mmmmm, lexicographically sortable. I prefer the RFC3339 profile, but YES to ISO-8601 for anything meant to be readable by a human. It's loony how many different formats are in use on the wire and internally in programs.
How is it self-centered though? There's no 'self' here. My profession (IT) is not the only profession that requires translations. They are required everywhere, and so what I am saying is applicable everywhere. My nationality is not in question either, because I am not a native speaker. How is that self-centered?
Hey, I'm actually arguing your case :). I find sticking to local customs to be self-centered, because "why should I change to accommodate other people?".
I'd say sometimes not crossing that bridge, which leads to a common meeting place, and instead insisting that everyone caters to your side is self-centered, but I just recognized that the exact same argument can be used against my position that most websites should be usable without JavaScript... Oh the irony.
For years the computing world had the position that if it couldn't be expressed in 7-bit ASCII then 'fuck you'. Of course might makes right and all that, but this goes beyond a couple of diacriticals. Finally with UTF-8 and i18n we now have a somewhat level playing field where, for those that want, we can exchange information.
The segregation between these language spheres on the web is incredibly effective. For instance, the Spanish and the English web rarely link to each other (and if they do it is most likely a link from the Spanish part to the English part). Ditto with many other language pairs.
A single world language is a nice thing to strive for, but I don't see why it would have to be English by default, even though that's what it probably will end up being anyway.
The French have always felt that their language is somehow special and that it should be given preference in the EU, and until not all that long ago it was the 'official' language in quite a few European countries by virtue of being the language of the wealthy. For the longest time the Dutch driving license still had the words 'permis de conduire' on it for that reason (and not as a convenient translation for any French police that might take an interest in the document).
French as a second language was the default, English a distant third. That's all changed now, but it is no different if it is imposed rather than of free will.
So if you want to go to that common meeting place, that's fine; but if you're forced to go to that common meeting place because a bunch of tech bozos have decided your culture, script and language don't matter - having already mastered English themselves because it was a requirement for them to be employed - then that isn't.
Google translate! I've seen more than one conversation between people that do not speak each others language through automated translators and even though the results are imperfect and sometimes humorous it does work to the point that it amazes me. It also gets it spectacularly wrong at times, but still. So far no intergalactic wars have broken out over this.
Do you speak one language only?
Even one additional language would change your perspective here. You won't be able to translate all that properly into one language.
A people's language is a window into their culture and their being.
Each language (and even dialect) has its own insightful and delightful quirks.
I could only lamely elaborate here on my own experience.
Iron law of Web development: You develop your site for the platform that has 95% of your user base, and tell the other 5% to join the 21st century because supporting them is not worth the cost.
Back in the early 2000s, this meant you had to check your site in IE. Stragglers still using Netscape could pound sand.
Actually, we used to tout Graceful Degradation back in those days. Most sites were either static, rendering information from a DB, capturing simple form input, or a combination of the three. AJAX wasn't a thing and the concept of Web Apps didn't exist.
If you were checking user agents then chances are you were doing something stupid or lazy.
> Actually, we used to tout Graceful Degradation back in those days.
I don't know who "we" is, but what was touted and what was actually done were, then as now, two different things. The client wanted something fancy, was only willing to pay developers who would deliver, and (crucially) was running IE as their browser. As was most of the audience for the page. Stupid or lazy it may have been, it's still what web developers did to feed their families.
I put one of those graphics on my very first angelfire site, because all I had was IE and I hadn't checked how it would work in anything else. When people started paying me for web development, though, I started installing browsers, operating systems, and even acquiring whole new devices (wap phones) to see how my client's sites were rendering in the real world. I spent whole nights agonizing about tiny tweaks and compatibility hacks to make sure everything looked good, this was the majority of the job in the early aughts. Now, in 2018, if you stick to a somewhat conservative set of html and css you can build nice looking sites that render great in any moderately modern browser across thousands of devices. It's insane to me that web developers now would rather descend back into compatibility hell by making their sites rely on a stack of unwieldy and opaque javascript.
If you're serious about giving this a whirl, uMatrix is even better for fine-grained control. I disable everything except first-party images and CSS by default and it's like a whole new internet.
There's a bit of a dance you have to get used to when things don't work, but I've got it into muscle memory now so I don't even think about it. I tend to click around in there enabling the first few obvious things for about 3 seconds, which works 95% of the time, and for the remaining 5% the URL gets copy-pasted into Chromium.
I tried uMatrix, and uBlock Origin advanced mode, but after a while I decided it was too much fiddling. I'm back to using uBlock in default config, along with an /etc/hosts file.
Yeah I agree it is really fiddly. I totally understand why people say "to hell with it". I only really put up with it in the first place because I was trapped with a woefully underpowered computer for a lot longer than I should've been. But I found the fiddling got easier and easier with time. I've stuck with it for over 2 years now and it's really just not something I notice any more.
Actually, when I get stuck on a different computer and I'm forced to use the same terrible internet everyone else has to use these days, I catch myself clicking around where my uMatrix icon would normally be and it takes me a few seconds of dawning horror to realise that I've totally lost control of my experience.
"No thank you, the latest celebrity diet fad isn't really my cup of tea."
"What? Where is this autoplaying video hiding?? Must be under one of these neo-popup JS disasters."
"No, I'm not interested in 17 ways that my imperfect sleep is going to kill me right this instant."
"NO I don't want to download Bonzi Buddy."
How does uMatrix compare to uBlock Origin with its matrix enabled through advanced mode? I can't tell much difference but would be willing to jump ship.
uBlock Origin doesn't have CSS or image control per domain but it's the JS I worry about!
They complement each other. I use them both. uBlock origin has my back for those rare times where I get the shits and just disable uMatrix entirely for a particular site.
Decentraleyes is also worth a look if privacy concerns form a part of your aversion to the blizzard of garbage your browser spews forth at you. It contains in-browser hosted copies of all that CDN JS junk, so if you have to turn on JS for some shared copy of jQuery that Decentraleyes happens to have bundled, you don't hit the network at all.
I've just realised I wasn't very clear about the way they complement each other - uMatrix is a lot more configurable and a lot more fine-grained, but at the cost of taking some time to master. I believe the investment of time was worth it. I use uBlock for set-and-forget coarse control and for the excellent filter lists, then use uMatrix for customisations.
Will need to look at that! I've disabled JS ever since the spectre/meltdown CVE announcement and only enabled it for a select few sites. Now if I hit a SPA site I just close the browser tab unless it's something I really need and want. It's ludicrous how many "doc" and "brochure" sites aren't functional w/out JS enabled.
Having just one is certainly less hassle, but uMatrix gives more granular control over what's allowed by the first/third party domains than uBO's advanced mode.
Same here. In addition, if I think I will visit certain website only once, I open it in private mode and disable umatrix for all domains in the site for a quick view.
I use Firefox containers along with cookie autodelete so each tab gets a clean slate unless I've auto-mapped it to a container (where I've white-listed cookies - eg. a Facebook container where I only visit facebook.com)
Much safer to disable umatrix in the normal tabs then.
I think you can obtain the same result with just uMatrix. Block cookies by default, and whitelist them in individual scopes so that they don't follow you anywhere else. Do containers provide any further benefit over this?
My approach on iOS: install an “everyday” adblocker, and a secondary one that supports custom rulesets (I use refine). Then, tell the secondary to block all images + javascript (I downloaded the “block javascript images and css” filter, then toggled the css rule).
Next, long press the reader view button in safari’s url bar, and turn it on for all sites.
Finally, since Apple doesn’t support high contrast black backgrounds in reader view, go into accessibility, turn on the three clicks to bring up accessibility shortcuts option, and add “classic invert” to it.
As a bonus for reading this far, also add “magnifier” to it, which lets you use your rear facing camera in a surprisingly good magnifying mode that should be part of the default camera app.
I have uBlock Origin and blocked all scripts on the Twitter website to try the trick described in the article, but nothing happened. Are noscript tags displayed when JS is disabled via uBlock?
The author talks about "noscript" and makes it clear that it's not a reference to the html tag but to the idea of surfing the web without javascript. I was waiting for a discussion of NoScript, the popular add-on for Firefox that blocks javascript by default and allows for turning it back on on a site-by-site basis.
I surf the web, as I have for many years, with NoScript turned on and as few permanent domains white-listed as I can get away with for security reasons.
I don't have any numbers to back this up, but my guess is that the population of people who use the NoScript add-on dwarfs the population of users that actually disable their javascript in their browser. I'm not sure how someone like me shows up in their numbers, but I suspect that I would be on the "blocks javascript" list and then on the "uses javascript" list if I am intrigued enough about the site to enable some of the domains it requests javascript from.
I don't know if web developers take that into account or not, but I suspect they don't because the number of domains I have to experimentally temporarily enable just to see some parts of some sites is getting even more ridiculous these days.
Maybe someone should design a new protocol that is built for interactivity from the start instead of one designed around static content with back-flips needed to make it usably interactive.
The web works much better without JS; sites load faster and the browser takes less memory. But sadly many sites use preloaders or plugins that hide the entire page content until JS is loaded. What an awful idea.
I am not a developer, I can write just html+css. For my personal website (http://mrtno.com/) I use just static pages written by hand. No JS needed, no stupid complexities.
I wonder if I can sell my lack of skill on the market... because the skilled designers/coders are programming a web that really has a tendency to be so bloated.
I browse the web with NoScript (a Firefox extension), and many websites won't even load without scripts. I wonder why. For many, many websites I don't see the need to have JS around at all.
Oh my lord, this article is nonsense. Had you gone to a website and not been able to read anything, I would say... good point. But you went to a WordPress admin to prove your point. Lord have mercy... you're really going to hate your experience once WP switches to a JS-based editor. Quick, let's try using Facebook and other apps built on JS frameworks and complain about the functionality not working. Let's try searching data tables with JS off, and let's try making AJAX requests to load data over time instead of all at once. It's not only in your browser, it's in your phones, in apps that provide you services you can't live without.
These types of articles are a shame and are only written to instigate fights among developers. The world is using JS, it's on every major app. Get over it or make something better.
I know a guy who built a single-page web app for a blog. Really.
It takes a few seconds to load the post titles, it'll make your computer grind to a halt if you have more than two tabs of it open, it took months to write, and it doesn't do anything you couldn't do with plain HTML and some AJAX request handlers... but Brawndo has electrolytes!
I know a guy that made a model ship in his spare time. It doesn't carry any passengers or cargo, doesn't meet even the most basic safety standards, and frankly doesn't do anything you couldn't do with a canoe.
I bet it looks pretty good on his mantelpiece.
There comes a point where you have to ask yourself - does it matter that he built (what I assume is a personal) blog in the framework du jour, simply because he wanted to?
The whole point of the article is that not everybody is able to use javascript, for a variety of reasons, and it makes perfect sense for apps to provide as close to equivalent functionality as is practicable.
As much as I'd love for the majority of JS to go away, there is no denying the usefulness of AJAX in web forms. Doing a full page submit is not a better experience for the user or the developer. That's basically the only piece that I can't see going without.
Yeah, I only want Javascript for when it actually speeds up a site. Which, from my experience, is simply:
AJAX for all form submissions
Service Worker for caching
Turbolinks[0] for page navigation
And those are easy to implement as progressive enhancements. If JS is disabled, the submit button does a regular page submit, the Service Worker is simply not registered and instead uses your web server's cache policy, and your links remain as regular hyperlinks.
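The Service Worker part really is just a feature detect; a minimal sketch, assuming you've written some caching worker at /sw.js:

    // If the browser supports it, register a caching Service Worker.
    // If not (or JS is off), requests simply hit the server's normal cache policy.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js')
        .catch(err => console.warn('SW registration failed, falling back:', err));
    }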
Honestly wouldn't bother with forms failing over. That sounds like double the work to support a few people. If you want to participate, enable JS, otherwise view the site as read only.
Not sure if I follow? For a form submission, you'd set it up like a normal HTML form. The <input type="submit"> would do its regular thing. If JavaScript is enabled, then you'd hijack the button with event.preventDefault and do your AJAX.
If I'm checking inputs client side as well as server side with JSON responses, there is going to be overlapping work. It's particularly annoying when carrying over fields/errors across submit pages. The old-school submit, check, refresh, show-errors cycle is an ugly experience for end users.
Of course it can be done; it's just not something I'm going to worry about as a solo operator developing multiple ventures.
Sure, it requires some careful planning. You'd set up the html form with method="post". Then have the AJAX send the request with its content type as "application/x-www-form-urlencoded". At least with Node Express, that will have the requests handled the same.
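Roughly like this; the '/contact' route and redirect are made up, but the URLSearchParams trick is what keeps the request shaped like a native form submit:

    // Client: enhance a plain <form method="post" action="/contact">.
    const form = document.querySelector('form');
    form.addEventListener('submit', async (e) => {
      e.preventDefault();
      await fetch(form.action, {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams(new FormData(form)),   // same shape as a native submit
      });
    });

    // Server (Express, with the usual `app`): one handler covers both paths.
    app.use(express.urlencoded({ extended: false }));
    app.post('/contact', (req, res) => {
      // req.body.email etc. is populated either way
      res.redirect('/thanks');
    });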
I understand how this works, I've built a billion products by this point. It's much easier and user friendly to post the form through AJAX, get a JSON response on success or an array of error codes for each field. Dealing with error messages and passing around form information is not something I want to deal with any longer in OG form submit.
Most content-specific sites (which is what we're talking about, right?) and Wordpress et al have RSS feeds out of the box; if you're that obsessed with making browsing that fast, surely aggregating feeds is by far the fastest way to browse?
When the body of internet users who refuse to use JavaScript is expressed in terms of a market whom custom (JS-free) experiences will induce to pay money for products and services, we'll begin to see businesses create JS-free experiences for them.
Are there marketing-specific attributes of anti-JS web users?
We simply get information on what stuff to buy from websites that work and ignore those that are broken.
My mobile browsing experience has been immensely better since I installed Brave and set it to block everything by default, including js.
When I'm presented with a blank page (not very often) I evaluate in a split second whether the several seconds needed to reload the site with js enabled are worth its content.
90% of the times the answer is no, so I click Back and then try the next search engine result for the same query. 10% of the time I really want to see or use that particular website, so I enable js for it on Brave's panel and wait for it to reload.
Summary: 1 additional click and 1 additional reload (to be only paid once) on websites I care about. ∞ less nuisance on everything else.
"I Used The Web For A Day With Javascript Turned Off."
I Used the Web For A Decade With Javascript Turned Off.
I have been using the Web for over a decade via software that has no Javascript capabilities.
There is a conception one can detect in this article that there is only one "working" look for any website: the look that the designer intended.
However IME many websites "work" without ever engaging the "look" the designer intended. To discuss this, one needs to first specifically define what it means for a website to "work". I am not sure agreement can be reached on that issue.
It may be that the definition of "work" varies with each user. Different users may want different experiences. I know what I want from websites: data. I am less concerned with design. This may not be the case for another user.
The author cites Amazon as an example website.
When I am only browsing Amazon I do not use a "modern" web browser or JavaScript. Without a browser or JavaScript I can download product data in bulk, convert it to CSV and import it into kdb+ for easier searching and viewing. Without a browser or JavaScript I can download images of interesting products and embed them into a simple PDF for offline viewing, using pdflib.
When I am ready to purchase, then I might move to a "modern" browser with Javascript-enabled.
Other example websites he gives are Wordpress, Github, Instagram, Twitter and YouTube. I read the content of all these sites without ever using a "modern" browser nor Javascript. If I want to view images or video, I can download them as in the above case of Amazon. I may choose to view the data in whatever application I choose on whatever computer I choose on the local network, and the computer need not be connected to the internet.
For this user, these websites "work" just fine without Javascript. I can always use a "modern" browser with Javascript if I want to see what the designer intended. However this is rarely necessary for the experience I seek.
I do that every day with uMatrix[1], only overriding hosts when needed. I also turn off all CSS animation with Stylus.[2] The Web is becoming unusable without that.
The referenced site BlockMetry.com doesn't seem to go into how it calculated 0.2% as having JS disabled. Especially since it references Tor traffic as a high percentage, my guess is that each connection was coming into the site with a different, fully-wiped HTTP connection, basically disabling tracking. Alternatively, what percentage of that traffic is simply cache bots for search engines, etc.?
I want to invite others to try to visit Yelp on mobile with JS turned off. First off, you can't access anything. Then the website provides nonsense reasons for why a user should have JS turned on, and tries to guilt the user into doing so.
I always have JS off by default, and turn it on momentarily when I feel it necessary. I do not feel it necessary for Yelp.
For perspective that's approx. a quarter of the size of the original DooM. And it's probably minified. And it does little more than fetching info from a couple data sources, formatting and showing it.
Honestly, I don't see the wordpress backend as a relevant test case. Whoever else you might be catering to, it's fair to assume that someone building a website would have JavaScript enabled while developing.
I'm glad he can code his brochureware site with minimal impact when JS is disabled. It is 2018 and every site is driven by JS, not just DOM events. If you disable JS, you know what you're getting into. Do we need to develop for those that decide to block CSS? Insist on deprecated browsers? You develop for your audience.
What do you mean, "Do we need to develop for those that decide to block CSS"? Surely you write semantic, standards-compliant HTML, and thus your site works just fine without CSS? If it does not, have you ever heard of accessibility? If that's no argument for you to do it right, search engines also like clean, structured HTML. Of course, it also helps with the JavaScript integration. ;-)
Somebody should create a list of JS free sites. They can sell it at Whole Foods in the "medicine" aisle. You guys know what I'm talking about; "JS Free -All Natural!".
I think we've about reached the point of ideology on this. If you are considering arguing with the other side, you're probably better off arguing evolution with a creationist, or facts with the GOP.
Well, I'm not sure if it was meant to be helpful, but I would like a "list", or better yet, a search engine, that excludes all sites with javascript. If there is no such widely used search engine, then sites without scripting can't thrive.
I've never been an ideological seeker of "organic, all natural" type foods, because the criteria tend to seem arbitrary and unscientific, but I've come to see the label as useful because the other stuff is increasingly adulterated with garbage I don't want.
I appreciate you engaging on this in such an honest fashion. I meant that to be a bit tongue-in-cheek and was evoking Bill Burr perhaps a bit too much. TBH though, I was very tired and too hostile with my post.
In my defense though, this whole conversation feels like a time loop on HN every time it comes up. It's not hard to predict/categorize the arguments and arguers at this point, as a generalization.
You have a vocal group of progressive enhancement cargo-culters who are a decade out of time, the very real motivators for the formation of the dogma having been missing for years. For them, progressive enhancement is "the way it should be". Then you have a group that knows the ship has sailed, shouting down anyone who won't join the new world order, regardless of whether they have significant value/cost reasons. Both sides talk at each other.
On the fringes there are the hypermilers and the data analysts. There is a small contingent of the equiv to hypermilers; they know it's rather inconsequential, with no notions confusable with ideology, but it's more of a game or hobby to them. The data analysts actually talk about the numbers of people still blocking javascript and whether it's worth the effort in 2018 for the average site.
At this point it's more a social study than an insight into the cost/benefit.
>If there is no such widely used search engine, then sites without scripting can't thrive.
Such a search engine would never be widely used, because there's no widespread desire to limit search results only to sites which don't use javascript.
It's a chicken and egg question. People don't know they want it, but they can't find out they want it either. I don't mind javascript or ads per se, but the bandwidth and processing requirements of modern ad-heavy web pages are becoming unmanageable for me. My breaking point is where, even using WiFi at home all the time, I am still going over 1GB a month for my cell usage away from home. And equally as important, pages are infuriatingly slow on both laptop and phone, even though I have a relatively recent and high powered phone.
But unless people are specifically searching for sites that don't use javascript, a search engine that excludes javascript is just going to return poor results for most queries.
A more useful search engine might be one that shows whether or not a site uses javascript, and if so, which common libraries, etc. Maybe even whether or not a site will render without javascript.
I like that he offers solutions, but honestly, more people are vision impaired than have JavaScript disabled, so instead of doing all that work, put the effort first into helping people with disabilities.
> There is a danger that more and more sites will require JavaScript to render any content at all.
What danger, exactly? While sites that don't require a lot of JS are nice, you will soon notice that there are many sites you want JS on. Music streaming services are one example.
I build API-based sites nowadays, mainly for two reasons:
1. They feel more rapid after the initial load and give the user a better experience.
2. You can use the same API for web and mobile experiences.
I personally think that WebAssembly is the future so the web will finally just be another compilation target and I can't wait for it to be wide spread.
> I personally think that WebAssembly is the future so the web will finally just be another compilation target and I can't wait for it to be wide spread.
Hmmm, I'm not sure about that. I've only done toy projects with WebAssembly, so I could easily be missing something, but it seems to be:
1) Overkill for most websites
2) Significantly more complicated than current web development - which is saying something given the explosion in complexity of recent web development practices
Obviously #2 can be lessened as time goes on, but it's hard for me to imagine #1 ever being false. It's great for performance-sensitive applications like games, but it doesn't seem to offer much benefit for CRUD apps or static sites.
Maybe it's overkill, but hardly any more than the V8 engine is for most websites. Compilation to WebAssembly will come at zero or little cost to the developer, so it shouldn't matter either way.
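For what it's worth, the loading side in the browser is already tiny; a sketch, where the module URL and the 'add' export are assumptions:

    // Load a compiled WebAssembly module and call one of its exports.
    WebAssembly.instantiateStreaming(fetch('/app.wasm'), {})   // '/app.wasm': assumed module
      .then(({ instance }) => {
        console.log(instance.exports.add(2, 3));               // 'add' is an assumed export
      });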
> I personally think that WebAssembly is the future so the web will finally just be another compilation target and I can't wait for it to be wide spread.
I really hope not. I don't believe we've seen the last of exploits like Spectre & Meltdown. Allowing anyone in the world to execute code on the same machine whose memory contains your bank accounts, your private memos, your passwords &c. will, I hope, come to be seen as riskier & riskier as time goes on.
I think your fears are a bit misguided. WebAssembly runs within the same sandbox as JavaScript code. I don't see how this would constitute a _higher_ security risk than browsers as they stand today.
WebAssembly isn't a higher risk than JavaScript — what's a higher risk is increasing the use of JavaScript and/or WebAssembly.
I fear that in the not-too-distant future web pages will simply be blobs of WebAssembly which paint UIs within a browser, and general-purpose computers will have become TV terminals — but TV terminals which allow the modern equivalents of networks & newspapers to steal one's private information.
>I fear that in the not-too-distant future web pages will simply be blobs of WebAssembly which paint UIs within a browser,
Some will, probably, but there's no reason all web pages will go that way. "App" type sites almost certainly, maybe streaming services, but the benefit just isn't worth the cost for most sites.
It didn't happen with Java, it didn't happen with Flash, it didn't happen with Silverlight, it didn't even happen with javascript even though it's possible to publish an entire web UI as an obfuscated "javascript blob" that writes on a canvas. It hasn't happened now that most people surf the web on walled garden Android phones, arguably the antithesis of a "general purpose computer" and closer to the "TV terminal" you described. Why is Webassembly the straw that will break the web's back?
If almost no one actually wants to build the dystopian scenario you're afraid of, and doing so would probably be impossible, why be afraid of it?