Your website should work without JavaScript (2021) (endtimes.dev)
129 points by nameequalsmain on Oct 15, 2022 | 149 comments


I used to think this. I even used uMatrix (RIP) to disable all JS. After a while though, it simply wasn't worth it. You'll have a vocal minority on HN of JS disablers, but the truth is most people don't know or care about JavaScript, much less want to disable it. So, it's really not worth catering to this 0.2% or even 1% of people as the article cites.

There are just too many interesting things JS can do. I've been playing around with Three.js and it's been incredible the types of 3D art you can create. I want to make those experiences for users which is simply impossible without JS.

However, I will also say that most of the websites I do make will work without JS, because I use Next.js as a server rendering framework, plus CSS can handle a lot of things these days, such as modals, link trees, etc. that used to require JS, and I use those where possible.
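For what it's worth, even a modal can be done without JS these days, e.g. with the :target selector. A minimal sketch (the ids and class names are just made up for the example):

  <a href="#newsletter-modal">Open newsletter signup</a>

  <div id="newsletter-modal" class="modal">
    <div class="modal-box">
      <p>Sign up for the newsletter.</p>
      <a href="#">Close</a>
    </div>
  </div>

  <style>
    /* Hidden by default, shown only while the URL fragment matches the element's id */
    .modal { display: none; }
    .modal:target { display: flex; position: fixed; inset: 0; background: rgba(0, 0, 0, .5); }
    .modal-box { margin: auto; background: #fff; padding: 1rem; }
  </style>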

What I will not do though is re-implement logic just for JS disablers. A particular example is using an animation library like Framer Motion that works with JS. It is somewhat possible to achieve similar effects in CSS, creating your own spring function in SCSS for example, to interpolate between values, but I'm not going to do that because it's effectively double the work for 1% of the users. Even the 1% figure is misleading for production apps if you use TypeScript (to prevent JS breakage) or modern dev practices like distributed CDNs (to prevent a package from failing to load).


I am a js disabler who writes js for a living. "Apps" are hard without js. Not worth it and I do not expect anything complex or interactive to work. Stuff like HN and news/forums function just fine without, though. And that is great.


I agree. Javascript can do cool stuff. When the author said:

> I'd prefer to write js all day — but finding html and css only solutions has made me a better developer.

> It's forced me to find creative ways of solving problems — and to learn new html and css features.

...it made me think about how you can often do cool things by taking advantage of some browser functionality, then you look it up on caniuse.com and find it's only available in 75% of browsers. Much worse than using a few lines of JS that can't be used by 1% of users.


> I'm not going to do that because it's effectively double the work for 1% of the users.

This implies that if you only did the CSS version you'd do the same amount of work as the JS version but it would work for everyone. Why wouldn't you do that? Doing only the JS version means you spend the same engineering effort but it doesn't work for as many people. Choosing to do that when you have a choice makes absolutely no sense.


> A particular example is using an animation library like Framer Motion that works with JS. It is somewhat possible to achieve similar effects in CSS, creating your own spring function in SCSS for example, to interpolate between values, but I'm not going to do that because it's effectively double the work for 1% of the users.

> > Doing only the JS version means you spend the same engineering effort but it doesn't work for as many people

Where did I say it would be the same engineering effort? I explicitly said it's more for the CSS version. In my example, the CSS version is inferior to what the JS library can already do. Creating a spring animation is only one of the things Framer Motion does built in. If I did everything in (S)CSS I would basically be re-implementing Framer Motion, which, as I stated, is double the work for <1% of users. Why would I want to re-implement a package when it's already there for me to use? So yes, it "absolutely makes sense" to do everything in JS only.


> Where did I say it would be the same engineering effort?

Saying "double the work" implies the doubling to be total - i.e. the Framer Motion approach and the SCSS approach are each 100%, and doing both would therefore be 200% or double.

What you're saying now would be more accurately described as "triple the work" - i.e. Framer Motion would be 100%, SCSS would be double that (200%), and the total would be 300% or triple.


Yeah, I guess you could say that, so in reality it's even worse (more overall work for me) in order to reach 1% of users.


The usual suspect for JS style manipulation is animations, and because CSS animations have all sorts of limitations you will likely run into, it's often more convenient to use the same JS tooling for animations almost everywhere.


I don't know much about your particular use case, but in my experience implementing an animation in CSS results in a smoother animation with better GL acceleration in the browser resulting in a lot less CPU/battery consumption.

So. There's that.

But otherwise I'm broadly in agreement. If you're making an online application, it pretty much requires JS. Not so much for CRUD db form apps/forums/blogs/wikis etc.


>Not so much for CRUD db form apps/forums/blogs/wikis etc.

If you want decent/custom FE validation you do still need it


I assume "FE" means "form error" - personally I agree with you, but you can still make it work without it: "progressive enhancement", where validation is still done server-side, which is an absolute must even if you do use JS for it.

Also, HTML5 validation is pretty darn powerful these days. Check it out if you haven't already.
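For example, a sketch of what the browser will enforce on its own before the form ever submits (field names invented for illustration):

  <form method="post" action="/subscribe">
    <!-- type and required are checked by the browser itself, no JS involved -->
    <input type="email" name="email" required placeholder="you@example.com">
    <!-- pattern constrains the value; title becomes the hint most browsers show -->
    <input type="text" name="zip" pattern="[0-9]{5}" title="Five-digit ZIP code">
    <button>Subscribe</button>
  </form>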


Probably "front end". But yeah, you can do a lot with just html and css.


I have a medical disorder which could one day render me blind.

I worry about how much of the web I will be able to use.

Don’t JS free web pages have better accessibility?


As long as the underlying DOM uses the correct semantics, there shouldn't really be a great difference. One can write a static page that would be less accessible than a js heavy alternative. Screen readers nowadays work with the rendered page DOM, not the raw html.
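Roughly speaking, what assistive tech sees is the semantics of the rendered DOM, with or without JS. A sketch of the difference (the toggle() handler is hypothetical, just to show that a JS-driven widget can still be accessible):

  <!-- Announced as a button, keyboard focusable, activates with Enter/Space: -->
  <button aria-expanded="false" onclick="toggle()">Show details</button>

  <!-- Announced as nothing in particular, not focusable by default: -->
  <div class="btn" onclick="toggle()">Show details</div>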


I don't care how many people there are that care about whether they are running javascript. The problem persists even if 0 people care about it.

The problem is that people are getting abused, even if they don't care about it. They are unknowingly lending their computers to unknown people who use them as their tools by running code on them. My computer is my tool and nobody else's.

If website owners think it's totally fine to run code on my computer when I visit their site without my permission or knowledge, I wonder what they would think if I ran code on their servers without their permission or knowledge. That would be fair. My computer for you and your computer for me.


Do you feel the same abuse when you download a desktop application, and it runs code?

You are free to revoke your permission for any site to run javascript, but website owners are not obligated to go out of their way to provide their services to you if you choose to do so.


> Do you feel the same abuse when you download a desktop application, and it runs code?

Not if the source code is available. Other reasons for no are that it's not automatic and invisible.

> You are free to revoke your permission for any site to run javascript, but website owners are not obligated to go out of their way to provide their services to you if you choose to do so.

People don't do that enough for it to have much of an effect.


I'm not really sure I get the point. Simply opening a browser means you're allowing the browser developer to run code on your computer, and you're giving them permission to do so by downloading, installing and using their software.


The differences between the browser I'm currently using and a random website with javascript on the web are:

- The browser's source code is available so I can see what it does.

- The browser is distributed by Debian who also distribute most other software I use so there is 1 organization to trust instead of 1000.

- The browser is not downloaded and executed and upgraded automatically, so it's possible for me to know what code runs and doesn't run on my computer.

- There are multiple browsers I can choose from. I can't choose which javascript to use when I visit a website.

- I can modify the browser if I want it to work differently (which I have actually done). It's harder to modify the javascript on a website because it's updated all the time, very tied to the rest of the site, and often unreadable.

- Modifications can be distributed, so if I make a better version, other people can use it and forget about the original version if it was bad.


I guess I understand (and respect) the argument that you only want to run code that you can vet and easily modify. But I don't think most people feel that way - I think most people expect that they're getting some service (like a browser) by allowing developers to execute arbitrary code they've built on their device. I therefore don't think javascript is abusive in the way you describe it, because it follows the paradigm people expect and want out of their software.


> I guess I understand (and respect) the argument that you only want to run code that you can vet and easily modify

This is not about me. It doesn't matter what most people feel or what they want to do. It's the situation where people are expected to run often invisible code written by untrustable people that is wrong. People should have the right to know and control what their computer is doing, even if they don't want to use that right. The current situation doesn't allow them to have that right.

> I think most people expect that they're getting some service (like a browser) by allowing developers to execute arbitrary code

The problem is that it's too easy for developers to use other peoples' computers for their own benefit when the current situation of the web is that people automatically let random people on the internet run code on their computers. It happens all the time. That's why they should not have this freedom.


This is purely about your preferences. You're arguing for a world in which the vast majority of people give up something they really like in exchange for something they really don't care about. "Abusive" is quite literally the wrong term because the exchange (you run arbitrary code on my device, I get software I want to use) is helpful to most people, not harmful.


> the vast majority of people give up something they really like

I don't know what you mean that people like exactly but it's definitely possible to let people use software in fair ways so they don't have to give up anything. A lot of times the use of javascript is unnecessary so removing it doesn't make any difference for the user.

> the exchange (you run arbitrary code on my device, I get software I want to use) is helpful to most people, not harmful

It doesn't matter that the exchange is helpful if it happens in an unfair way. Compare this to prostitution. It's illegal (at least the other side of it) in many countries. But everyone involved gets what they want or need and the exchange is helpful to everyone, so why is it illegal? I guess it's because some people just think it's wrong and that it's an exchange that can't happen in a fair way even if everyone involved is happy with it.


Modifying websites with userscripts, userstyles, and extensions is wayyyy easier than modifying native apps IMO, especially if they aren't open source.


The problem with modifying javascript is that it is very dependent on the website so the modified version can stop working at any time. I can't write a javascript program for a website and then use it forever just like I can write a text editor and use it forever.


Any Javascript your browser runs is, by definition, available for you to see what it does.


It's usually distributed in an unreadable format, and even if I'm able to read it, it may change the next time I visit the website without any notice, so I would have to read it every time I visit unless I want to risk using an outdated version that will stop working soon.


It's funny how this argument doesn't fly with people when it comes to blocking advertisements, with one man claiming it takes food out of his child's mouth.


Maybe he should consider feeding his child with actual food rather than my personal data. Probably would taste better, at the very least.


Lmao are you serious? You're specifically visiting their website. You are explicitly requesting whatever they're serving up. It's akin to complaining that a program you downloaded is gasp executing code on YOUR computer!


> You're specifically visiting their website. You are explicitly requesting whatever they're serving up.

It would be preferable if I could visit a website without having to trust it to run code on my computer. In general, it's preferable to make things as unintrusive as possible.

> a program you downloaded is gasp executing code on YOUR computer

The same is true for programs people download too. There is too much abusive code, both on the web and in other programs.

When I download a program I do it from a trusted source. The program has the source code available and I (and everyone else) have the right to read and modify that code. The web can't work like this because:

1. Websites are so many that I can't trust all of them.

2. Even if the javascript is readable, it is tied to the rest of the website and can change at any time so any modifications or rewrites will stop working.


Unless you want to go down the ultra-pedantic route that HTML is markup rather than code, why are you reading HN?


The difference between HTML and Javascript is that it's much harder to use peoples' computers for arbitrary things through HTML alone. With Javascript it's done all the time. I don't know what this has to do with me reading HN, but I can add that HN works without Javascript, which is something more websites should do, so we can get rid of the expectation that you have the right to run arbitrary script code on peoples' computers when they just want to read a page of text.


You do have the right to run arbitrary script code on people's computers. That's been a fundamental part of what the web is and what it is capable of for over 20 years. You, as a web developer, publisher, what have you, have the right to write whatever code you want, even to render text, just as you have the right to design your own page layout, choose your own fonts and colors. It's up to the user agent to decide whether or not to run the code.


Something is not right just because it has been happening for a long time.

> It's up to the user agent to decide whether or not to run the code.

Yes, it should be. The problem is that things are constructed so that that solution doesn't work because most people don't know how to do it and if you do it a lot of things break. That forces people to let others take control over their computers when they may not actually want to do that. If I let you control my computer, that should be because I want to, not because I'm forced to in order to be able to do what I need to do.


>That forces people to let others take control over their computers when they may not actually want to do that.

This is unnecessarily fear-laden hyperbole. Javascript doesn't force you to let others take control over your computer. And even if it did, for the sake of argument, then literally all code you run does the same, and likely takes far greater control than javascript is capable of.


If I need to run the code in order to do what I need to do then I'm forced to let someone else control my computer.


Then, don't visit their websites? If you want to run only things you trust, then do so. Others (both users and website creators) are free to make their own decisions.


I want to use the web and do a lot of other things on my computer. I don't want to have to avoid a lot of things just because people tend to make abusive software.

> Others (both users and website creators) are free to make their own decisions.

I don't know how much of a free decision it is when the user can't know or control what the code is doing.


This reads like satire.


I would very much like for HTML to incorporate concepts from htmx (particularly more flexible transclusion) so that more dynamic websites and applications can be built entirely in HTML.

JavaScript became popular because HTML stopped moving forward as a hypermedia, focusing instead on client-side features that, while certainly nice, didn't increase the expressive power of the format in terms of hypermedia.


I feel this way too. JavaScript is overused in part because HTML is very much lacking for the types of websites people expect and want to build today.

By sticking to a rather small selection of semantic and layout primitives with some form fields sprinkled on top, we’re forcing all the responsibility onto the individual developer. Which leads to poor performance, poor accessibility, incalculable man-hours wasted building the same things over and over.

Why isn’t there a <gallery> element that I can slot child items into and style with CSS? Or a <lightbox> element to view those items in a larger size? Or tabbed control groups? These things have existed since long before people could build rich apps on the web, and having them built into the user agent would give so much power back to the user in terms of customisation and accessibility.

It’s like HTML still lives in a world where the web is solely for documents.


Agree, but HTML itself doesn't have and doesn't need "transclusion" when HTML is understood to be an SGML vocabulary and SGML has all these things and more, from basic sharing of headers/footers and other markup fragments [1] to parametric macro expansion and event-based templating [2].

[1]: http://sgmljs.net/docs/producing-html-tutorial/producing-htm...

[2]: http://sgmljs.net/docs/templating.html

(Putting transclusion into quotes here because that term is from Ted Nelson and is possibly ill-defined in a HTML/SGML context)


> when HTML is understood to be an SGML vocabulary and SGML has all these things and more

HTML hasn’t been SGML for a long time, and it’s never going to go back. Understanding it to be an SGML vocabulary would be a serious error. In the context of HTML, what SGML supports is even more irrelevant than it was twenty years ago (and it was pretty thoroughly irrelevant even then).


MS had a great solution back in the day with their Dynamic HTML. The anti-MS community opted for what we have been left with instead of adopting what MS came up with. DHTML was pretty awesome.

https://archive.org/details/dynamichtmlrefer00redm


I'm not sure what you mean, because DHTML was really just an umbrella term for the then-new HTML+CSS+JS combo; the "dynamic" part came from the latter two.


"Why you should double your workload in designing a website to support 2 in 1000 users" - said no PM ever.


Commercial site, personal blog, social media &c yes, of course, you are right

Sites people use to access vital services: you should be designing Web sites that will work on old broken devices on slow connections. 2‰ of 30 million is 60 000, a stadium full.

https://shkspr.mobi/blog/2021/01/the-unreasonable-effectiven...


... also the original article said 1%, so that's really 300 000.

the 0.2% was just the deliberate disabling. The rest were all the stuff documented here:

https://kryogenix.org/code/browser/everyonehasjs.html


It reduces the workload. Using htmx + tailwindcss means I could duplicate all the functionality of an eCommerce backend in a mere week, for something that I had been doing in Vue, still unfinished, for almost a year.

And after that?

ZERO EXTRA WORK.


I don't care so much when a site with lots of interactivity requires JavaScript, but holy heck, do few things piss me off more than a blog site with articles that require JavaScript just to read text! Whoever writes web software to do that should be ashamed of themselves.

If it's really that hard for you to prerender a page on the server, then just send the text by itself. Can you do that? I don't need you to prerender your header, your footer, your menus, or any of that junk. Give me the text body. Place it in an element your JavaScript code can replace when it boots, if I choose to let it run at all, which I won't if I can avoid it.


Just out of curiosity - why is this such a big deal? Personally, as long as the page has a good UI, I couldn't care less if it is being prerendered or not.


The comment you replied to says

> if I choose to let it run at all, which I won't if I can avoid it.

It's not rare for people with a security/privacy background to only allow whitelisted sites to run JS.


And - what's the point? Websites that want to track you will track you anyway, there's a whole array of technologies that will let them do it, and they won't display anything to you if you don't run JS. Security objections are pointless, it's not 2001, JS is not anything new, and it's not at all a security risk, browsers are sandboxed and well isolated.


The objections to JS on security grounds are not about tracking. Browsers doing JIT are not as isolated as you may wish them to be.


The point is that you're not supposed to receive the text by itself, the text is only there to entice you to visit the site, but you're supposed to click links, subscribe to the newsletter, get tracking scripts served to you, and buy whatever product/service they're luring you towards with those blog posts.

Nowadays every marketing person will strongly insist on producing tons of semantically meaningless "blog posts" just to generate some new "content" on the site so that google ranks it higher. The blog very rarely exists for you to just read it, most of the time it's a marketing tool supposed to steer you towards something else, and it's not in their interest to allow you to just read the text without the accompanying cruft.


I agree with you.

I block JavaScript on my own site just to make sure.


The real pertinent reason to regulate and to get noscript/basic (x)html web portals (at least on "critical" online services) is that "javascript" requires a grotesquely and absurdly massive and complex web engine, including its SDK.

The only web engines today are blink/gecko, financed by google (vanguard/blackrock), and webkit, financed by apple (vanguard/blackrock). They are all written in c++, which also has a grotesquely and absurdly massive and complex syntax, and better not have a look at the compilers... aka double the pain.

In other words: "javascript" = don't have "big tech" controlled software? no web for you!

hard truth: bazillions of online services can work perfectly without a "javascript"-able web engine (javascript alone is some work, but several orders of magnitude less); namely, basic (x)html forms can do wonders... and actually they did!! But web dev tantrums and planned obsolescence got involved.

The only way out of it is very strong regulation, and I am personally seeing lawyers to seek noscript/basic (x)html interoperability on "critical online services".


Blink, gecko, and webkit are not javascript engines. Those are v8, spidermonkey, and javascriptcore respectively.


> I am personally seeing lawyers to seek noscript/basic (x)html interoperability on "critical online services".

What is your personal incentive to do this beyond the ideological viewpoint of not being beholden to big tech?

For the record I’m in agreement with your stance, I’m just curious about this.


My government's regulatory body in charge of preserving interoperability with small/alternative tech and stability over time, at least for critical online services, is not doing its job.

I was even planning to bootstrap my own small/alternative "tech" from some small/alternative noscript/basic (x)html components. Namely without the 489374392843 devs financed by google or apple...

Not to mention, it is a quasi-monopoly: blink is a fork of webkit, and gecko is kept alive by google (probably as a protection against anti-trust regulation). It is even worse from an "owner" point of view: from a "vanguard/blackrock" point of view, this is a monopoly. That would concern the general anti-trust regulating body.

There is certainly way more to it; I am discussing with lawyers what to present to the judge.


QuickJS exists, and SerenityOS's browser (Ladybird) shows that JS engines, and JS-able browsers, are doable by developers outside giant corporations.


ladybird is c++, and I was careful to include the issue of c++ too. Depending on the grotesque and absurd c++ syntax complexity, which puts any c++ compiler, even a naive one, out of reach for any real-life, reasonably-sized alternative, is a mistake on the same level as the current web engines. I said "double the pain".

It would have been much more interesting to have ladybird written in plain and simple C (with the right compile-time and runtime function tables and NOT compiling with only gcc or clang). Maybe it is not too late to fix that.

That said, the real core of the problem is the "javascript-ed" web itself; bazillions (if not all) of critical online services should work with noscript/basic (x)html browsers.

QuickJS shows that the issue is actually the "javascript-ed" web engine and the c++ language, not javascript alone.


> It would have been much more interesting to have ladybird written in plain and simple C (with the right compile-time and runtime function tables and NOT compiling with only gcc or clang). Maybe it is not too late to fix that.

Quite the opposite. The serenityOS people are implementing their own programming language, [jakt](https://github.com/SerenityOS/jakt), which currently compiles to C++, and they'll move the codebase progressively to it as they go. I personally doubt that the project could have moved so fast in just a few years if it had been using C instead of C++.


I remember now, I did have a look at that.

I guess if they don't make jakt another c++ (ultra complex syntax), and do manage to have a jakt compiler to machine code, maybe they will beat rust and its servo web engine.


As far as I understand, Gecko and Blink are completely separate, and if we were to put things together with slashes, Chrome/Edge/Opera (Blink) and Safari (WebKit) share the same lineage (and even Konqueror!). Firefox/Gecko come from a very different direction. Calling out the investors behind commercial projects fails to acknowledge the rich OSS history behind the browser engines. By no means do I intend to disregard the tremendous influence Google exerts on web standards with their browser monopoly, but putting it all in the same bin does not help the issue either.


I don't believe in that much incompetence at this level. I believe the current landscape of "web engines" is mostly steered by the people of "vanguard/blackrock" (apple/vanguard/microsoft/etc) and that they know they have a monopoly over the web and are trying to blur this landscape to avoid regulation. This is so acute, I am seeing nothing but toxic malice toward humanity now.


It's pretty simple. If you're doing it for profit, use whatever is cheapest/fastest (ie, JS devs and their transient frameworks). If you're making a website for human people meant to last more than 3 years, use HTML.


I guess in real world scenarios, progressive enhancement is the way to go for many sites (like embedded web interfaces).

Offer the minimum core functionality without js. Add js for better UX and additional functionality.


"Everything in moderation"

Spot on


Purists who refuse to load websites with JS are such a small percentage of visitors they can be safely ignored. It's an uphill battle that they're losing very fast. I don't think you can use more than 5% of websites without JS in any capacity.

The whole point of websites is lost if they can't track visitors, see what they're paying attention to, and how to manipulate their behavior. Everyone's doing it, and if you don't, you're at a disadvantage. There are very few websites whose purpose is not to influence you to spend money on something. Most of the articles and posts you read are AI generated, or written by "content writers" never intended to be read by actual people, they're there for the google bot to keep some activity going.


> Purists who refuse to load websites with JS are such a small percentage of visitors they can be safely ignored.

Except they are not[0]. Besides micro browsers there's screen readers that can't even use sites that customize a bunch of div elements with no accessibility tags.

[0] https://24ways.org/2019/microbrowsers-are-everywhere/


For most commercial purposes screen readers can be safely ignored, they are not used by big spenders. Social share cards can be previewed readily and easily prepared without impacting the main content of the website.


Microbrowsers do not run JavaScript. That means if all the routing is handled by JavaScript or the only HTML served is a script tag a microbrowser will show nothing. More and more JavaScript monstrosities do not properly show up in share sheets because they offer no meaningful content to a microbrowser.


As a practising JavaScript-decliner for a few years: your “only 5% work” figure is wildly, extremely wrong. It depends a little on what types of sites you’re dealing with, but for contenty sites rather than appy sites, I’d put it past 95%, with a few notable major site exceptions (so that by content it may well be below 95%) and certain subcategories that are more commonly broken.

And I’d say the battle has actually gained some ground in the last eight years, as server-side rendering of JavaScript content stacks has made headway.


Your mobile website certainly should require as little javascript as possible. You don't know what apps the user is running in the background on what phone, so your tests on high-end devices do not represent the experience of your visitors. There is no need for these drawer menus (often slow) or complex interactive filters that need 50 AJAX calls. Also, cookie consent modals have become the bane of the mobile web experience, especially when they pop up 5 seconds after the user started to read your page on a mobile phone. Don't use a modal dialogue, just show a page with the cookie consent and then redirect to the requested page. What works on desktop doesn't automatically work on mobile just because you slapped a few media queries into your page.


This is the site in question: https://missingdice.com/dice-roller/. It’s a nice site, I like it.

But if I made that site with JS, and some user wrote in telling me they disabled JS and it didn’t work, I’m much more likely to tell them to either enable JS or pound sand than I am to reimplement the logic on a platform where any requests beyond 125k/mo or 100hrs compute start billing me.


As recent as 2016 I was building some sites without any JavaScript. These weren’t small sites either. You can achieve a lot using some basic forms. It was quite fun


95% of the functionality of the web apps I've been put in charge of building works without JavaScript. You do lose things like form validation, and some of the links to things like detail forms open in the same window instead. Or, say, a master/detail selection having a full list of hundreds of items instead of 90% of them hidden based on <optgroup>.

It's useful for most of the reasons listed in the parent article.

You can also do quite a lot in CSS these days for interactivity - and with better performance than JS. Expand/collapse sections (with animation), menus including transition delay for more mouse forgiveness, slide out panels.. Also pretty easy to then tweak it for alternate mobile layout.
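For instance, expand/collapse needs nothing more than <details>, and you can layer a small animation on top. A rough sketch (the keyframe values are just illustrative):

  <details>
    <summary>Advanced options</summary>
    <div class="body">
      <p>Extra settings go here.</p>
    </div>
  </details>

  <style>
    /* Simple reveal animation when the section opens */
    details[open] .body { animation: reveal 200ms ease-out; }
    @keyframes reveal {
      from { opacity: 0; transform: translateY(-4px); }
      to   { opacity: 1; transform: translateY(0); }
    }
  </style>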


HTML and CSS have form validation too. You can do something like :has(:valid) when you add in your validation schema in the HTML.
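A minimal sketch of what that can look like; :has() support is still uneven across browsers, so treat it as progressive enhancement:

  <form>
    <label>Email <input type="email" required></label>
    <button>Send</button>
  </form>

  <style>
    /* Dim the submit button until every field is valid */
    form:has(:invalid) button { opacity: 0.5; }
    form:not(:has(:invalid)) button { opacity: 1; }
  </style>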


Yep! Regrettably the issue I had was getting JS and HTML5 form validation to work nicely together. The issue was that the HTML5 validation was triggering before the onsubmit which was when we were reviewing the entire form for consistency. So even though we played around with it, and in some places used it for "required" it just didn't work out too well with a lot of the ways we were validating forms. I'm sure we could have made it work - turning on and off the HTML5 validation as needed, using an onclick or some other event instead, but it was easier to just let the non-JS version be validated server-side (which we obviously do as well, as any sane dev would).

I am a fan of the HTML5 validation though - offered feedback when they were working on it that I think made it into the spec like interaction of required and hidden elements...

And, I suspect for most of our forms the simple patterns of HTML5 validation would be fine, it's just that sometimes it wasn't, and it was easier to be consistent in how validation worked and looked... So. Chalk it up to laziness. So long as the forms still worked we decided to let it be.


"I Turned Off JavaScript for a Whole Week and It Was Glorious - Wired 2015"

It's not 2015 or 2016 anymore, not even close. We're the better part of a decade moved on. This is not a valid argument any longer. JavaScript is a critical part of the modern web experience. You may not like it, but that doesn't change the fact.


I just read through that article. Of the specific things it mentions, I think Twitter might no longer work without JavaScript, but everything else is true now just as it was then.

Nothing has changed.

Source: personal experience, as one who blocks JavaScript by default because it makes the web better more than it breaks things. Also very occasional observation of people browsing the web with very similarly-configured browsers apart from the disabling of JavaScript, and of people browsing the web without even uBlock Origin or similar (and that’s really painful to watch).


I block 3rd-party JS by default. (I'm not sure who the second party is supposed to be, I guess that's me). I'll take JS that you serve me yourself.

Basically, I'm willing to trust a site that I've deliberately visited; but if they want to load a score of scripts from sites I haven't deliberately visited, and I can't see content without them, I'm outa there. Often these are big scripts; and the site-owner can't possibly be vouching for them, if he's not even serving them himself.

I think linking out to 3rd-party Javascript is lazy and irresponsible. Any testing you've done on it could become obsolete at any time, without notice. It's done to avoid having to keep up with patches; "No, I don't know what version of JQuery you got served; you were supposed to get served the latest".


Twitter removed their non-JS interface but Nitter still supports it just fine.


The problem is that JavaScript is used for tons of garbage purposes too. I wish for more control in my browser over what is running


Faulting a technology for its malicious uses is a common fallacy, but a fallacy nonetheless. Not saying you're wrong (you're not), but it's not a valid argument against JavaScript.


If something is ubiquitous it will be used in its most extreme negative form.

This is why things being generally available that have the potential to be harmful causes such a debate, because on the one hand it’s useful, on the other it’s harmful.

The reason people blame the tool is because the tool enabled people to be able to maximise their harm, and people will maximise harm when given the opportunity.

In this case: JavaScript enables people to externalise their processing cost into people’s own computers, which is a huge potential for harm as it’s a really asymmetric power dynamic.


It's not an argument against javascript but against enabling javascript by default.


Forms and meta refresh can be pretty great.
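A tiny sketch of that combination (the endpoint and refresh interval are made up): a plain form for writing, and a meta refresh to pick up new content without any script:

  <meta http-equiv="refresh" content="30">

  <form method="post" action="/comments">
    <textarea name="body" required></textarea>
    <button>Post comment</button>
  </form>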


I read HN a lot, and this topic always comes up, so I said: "I'm going to build a site that makes these readers happy. Take all their advice, no JS at all. Everything done in HTML. Everything they're asking for." Turns out the hardest thing to do is to get people to use your site.


> Take all their advice

This site is horrible to take advice from. I would sooner ask why other people are doing things in a given way before trusting the HN hivemind to give advice on how to build a successful product.


Ironic, isn't it, when HN is directly related to a startup accelerator with many successful products and companies.


Annoying features like JS can only cause us to leave a site we were already interested in.


Content is king, in the end. However serving it in a nice packaging never hurts.


> Building everything you can without js will make your site:

an immediately readable text rather than a program that tries to generate one.


This proved tricky for me because of Core Web Vitals. I made ptable.com work without JS, but I still show "dead" interactive components (the properties area above) because expanding it when JavaScript arrives would destroy my Cumulative Layout Shift score.


give everything that is supposed to be js-only a class called js-only (or whatever) and then just do:

<noscript><style>.js-only { display:none !important; }</style></noscript>

this won't work on failed script loads, only on actually disabled javascript, though


A .no-js class is sometimes handy too, instead of or in addition to .js-only.

I don't like using <noscript> to detect absence of JS, because browsers that are blocking JS (e.g. whitelists), or even blocking some JS and allowing others to load (e.g. inline vs same-domain vs cross-domain vs tracking-blockers), haven't actually disabled JS, and won't necessarily render <noscript> even if the JS won't load.

But there are similar ways to achieve the same thing which don't suffer from this problem. Unfortunately to cover all cases they are complicated, using combinations of inline <style>, <script>, document.write, similar lines in externally loaded scripts which only run if those are loaded, and DOMContentLoaded to do the right thing if a synchronous <script> did not load after all.

Once it's done it works, but it's not trivial to work through all the cases.
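A minimal sketch of just the easy part, the .no-js flavour, assuming your stylesheet hides .js-only and shows .no-js by default. It only proves that inline JS ran at all, not that your external scripts loaded, which is where the extra machinery above comes in:

  <html class="no-js">
  <head>
    <script>
      // Flip the flag as early as possible so styles switch before first paint
      document.documentElement.classList.replace('no-js', 'js');
    </script>
  </head>
  </html>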


I don't think vanilla HTML saves more bandwidth than JavaScript. If a part of your website needs an update (react, vue, fetch), only that part is transmitted over the wire. It's also less jarring for the user, since a full page load would scroll the content back to the top in most browsers.

Also, without JavaScript, some webapps would need to store user state in a server database rather than in the browser's memory.

Fix html and we won't need JavaScript.


When you're using a server side template all your queries happen on the server in one request. Most react/js apps I've seen need to do a bunch of requests to get that same data, one request for content, another for author profile, another for comments, etc. Maybe with non-rest-ish page specific endpoints.

I'd need a more specific example, but for user state I imagine it can be stored in json-web-tokens, or in the url, depending on if you mean like the current query or some kind of actual session data.


> It's also less jarring for the user,

Jarring for the user? Are your users cavemen with severe PTSD? Loading a web page is not a "jarring" experience for anyone. Was the first load of your page "jarring"? Was loading a new page "jarring"?

This idea that users are hyper delicate flowers is ludicrous. Having a page stuck with a bunch of gray gradient placeholder elements while JavaScript loads and does whatever is far more jarring than a page that just loads.

Edit: autocorrect can get bent


The bandwidth you save with partial updates has to be weighed against the size of React/Vue/whatever that had to be downloaded in the first place in order to enable them.


is it even a website if it does not work without javascript, https and websafe colors?


I don’t think people worry about websafe colors anymore, as displays have moved beyond 256 colors


I guess it depends if your website does something valuable or substantial.

If it's just your hobby site... great, develop it without JS.

But if it's a site people pay to access (or requires ads) and they expect some functionality - like Youtube or Netflix say - I don't think this is possible.


I’m genuinely curious: why would Netflix or YouTube not work without JS?

As far as I understand it, YouTube and Netflix are a nicer UX with JS but there’s no particular feature that would kill it completely.

Like, for example, I thought that you can play HLS* manifests from plain HTML5 video tags.

EDIT: I was thinking of HLS and not DASH


No browser supports DASH any more; Edge 12–18 did, but that was lost in the Chromium shift. (Source: https://caniuse.com/mpeg-dash.)

But HLS has wide support on mobile browsers, and on desktop Safari. (Source: https://caniuse.com/http-live-streaming.) So YouTube should in theory be able to support JavaScript-free playback for many of its users.
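So in theory something like this plays with no script at all where HLS is supported natively; the URLs are made up and the MP4 is just a fallback for everything else:

  <video controls preload="metadata">
    <source src="https://example.com/video/master.m3u8" type="application/vnd.apple.mpegurl">
    <source src="https://example.com/video/fallback.mp4" type="video/mp4">
  </video>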


Whups! Thanks for pointing that out!

Do you think that difference would materially affect whether YouTube or Netflix could exist as a website in a meaningful way without JavaScript?

By meaningful: would it fulfil its core competence of finding videos and streaming them to the browser?


One feature that's kind of difficult to emulate without Javascript is the ability to play media continuously while navigating with the back and forward buttons. Rich Harris talked about this at the 5 minute mark of his video on whether or not SPAs have ruined the web [1].

Youtube actually uses this capability specifically. If you add a Youtube video to a queue, it switches to a picture in picture mode where you can navigate while the video is still playing.

The only other technical challenge is handling animations between route changes. There's a browser api in progress (the shared element transition API) [2], that will allow animations during navigations without client side routing, but it still requires Javascript. Navigation animations are arguably just purely aesthetic though.

[1] https://youtu.be/860d8usGC0o?t=299

[2] https://developer.chrome.com/blog/shared-element-transitions...


For regular viewers, the only pieces of functionality I can immediately think of that couldn’t readily be done sans-JavaScript are search autocomplete, annotations, end cards, auto-play (to the next video), and maybe quality selection (I’m fuzzy on what HLS actually lets you do, never actually worked with it).

Even things like voting and commenting are fairly straightforward to do basically, though you could do vastly better with an auto-resizing iframe (https://github.com/whatwg/html/issues/555) which would basically allow you to load new content into the document with a click (though it’ll clutter the history and make the back button behave surprisingly, see also https://github.com/whatwg/html/issues/6501 on that).


Closed captioning would need to be added to the standard video tag. Which IMO is a worthy addition.


The <track> tag has been supported across the board for 8 years.
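A sketch, with invented file paths:

  <video controls src="/media/talk.mp4">
    <track kind="captions" src="/media/talk.en.vtt" srclang="en" label="English" default>
  </video>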


Once WASM becomes popular almost no website will work without JavaScript.


Why would people use WASM for their HTML content?


Because that is what new frameworks are doing.


There are situations where JavaScript is being replaced with WebAssembly. There are absolutely no situations where serialised HTML is being replaced with WebAssembly.


Flutter web is one of them. It displays on a canvas element.

In the future with WASM, I expect the same thing, a Rust web app that also displays on a canvas for example.


How are you going to address accessibility when rendering everything on a canvas element? What are search engines going to scrape?


Flutter builds a parallel accessibility tree in the DOM that screen readers and search engines scrape. It's kinda cool, you don't have to rely on HTML, CSS and JS as the application layer, use something else like WASM for that, but for the document layer, use HTML. In a way, it's like the original purpose of HTML is preserved, as I see many people on HN complain about how the DOM shouldn't be used for applications and should only be used for documents.


I see it all the time. Tons of sites out there are replacing standard HTML with JS and won't even work without JS.


I said nothing about replacing HTML with JavaScript.


That is what these frameworks will do.


> Downloading d3.js (a popular graphing library) costs 1 cent in Canada. In Mauritania it costs 0.06% of the average daily income.

The whatdoesmysitecost.com links are now broken and that site as a whole seems to be fairly broken, but it was discussed here last year: https://news.ycombinator.com/item?id=27759583. They claim that the method they used to decide prices is a best-case scenario, but in reality it’s generally not far off a worst-case scenario, regularly off from the likely realistic case by at least a factor of ten. Though there certainly are scenarios where the cost will be even higher than the figure presented.

—⁂—

> Very old browsers like IE < 3, Netscape 1, Mosaic, and others don't support javascript. Almost nobody uses these browsers anymore — but you can bet somebody is.

Your site should not work in those named browsers, because you should be serving by HTTPS only (no, your general-purpose public-internet site is not an exception). And using comparatively recent cipher suites so even things like IE 8 should not work. But the likes of Lynx, sure.

—⁂—

For my part, I default to turning JavaScript off via uMatrix because it makes the web better and faster and lighter far more often than it breaks things. But I also have that extension disabled in Private Browsing windows, so if I want to run something with JavaScript I can open it that way nice and easily.


He also mentions Lynx. I use w3m quite a lot myself in tmux sessions. In fact, I have about two dozen tabs open in w3m right now.

And there is still quite a lot of content out there that works without JS. For example, https://lib.rs/ is a lightweight and fast pure-Rust alternative to crates.io that works great in my tmux session, which is pretty awesome if I'm doing Rust stuff in a terminal. (It was originally written as a replacement for the official site, which rejected it due to the maintainers being more familiar with their JS implementation.)


Those bits about the price in Canada feel like clickbait: technically possible, but with caveats learned only after clicking, like perhaps it's only experienced by people with non-Canadian SIMs who decide to use a terrible roaming plan instead of something better? But with the links being dead, this is just speculation -- not writing in this format (of third-party dependencies filling in what appear to be major gaps) would've been helpful to avoid such speculation in case I'm entirely off-base and that truly is the price of data somewhere.


I don't know about actual canadian prices, but I do visit canada periodically, and only once was I in a position to go through the complexity of acquiring a roaming SIM. My cost is 20¢ per megabyte.

The last time I was there for a week I made sure to exclusively use Firefox+NoScript and set up a number of large regions in Google Maps caching - in the past I also used the Firefox data saver image option, but regrettably they removed that from the config (hm, I wonder if it's still available in about:config as a hidden option - shame they also blocked about:config unless you use a non-mozilla build)

(my suspicion is it was probably one of the many cool features lost in the rewrite)

There are still data limited cell plans in the US though. Quite a few sold by T-Mobile partner resellers as economy plans.


About a decade ago, the internet disagreed, and everything started requiring JS.


Right, and now that we've spent a full decade exploring that route together we should be in a great position to step back and ask ourselves if it turned out to be a good decision.


s/internet/money

There is nothing inherent to the internet or the promises thereof that requires the kinds of interactivity that have driven the tidal wave of front-end scripting that has flooded the space. The overwhelming majority of it comes down to designers being cute for no obvious reason and business entities that aggressively abuse their clients' browsers and bandwidth solely because cross-browser compatibility is notionally cheaper than maintaining software across several operating systems. The rest is monkey see, monkey do.


I'm not convinced by the reasons put forth in the article


I’ve found that the best browsing for me has been a result of disabling HTML and CSS as well. No errors, 0 load time, accessible to anyone even if they don’t have a computer or internet connection. Websites built without JS is cute but if you’re serious about fast, accessible, error-free browsing then using HTML and CSS is really just bloat.


Lol. Fastest browsing ever. Crazy to think that in 2022, we're being advised that sites should work without JavaScript. The percentage of people (actual, real live people) who turn off JS when browsing is probably <1%


There are such apps (pay for parking with SMS) and they work just fine.


i agree. we need a browser that connects to port 80, remove all <.*> tags using regexp and put all links at the bottom like a true document format.


I enjoy browsing with curl and discarding the bodies of http requests. The headers give me all I need.


I like the idea. Lets try it out. [1]

[1] - https://ip.ohblog.org/


The infinitely-fast web! LMAO well played.


I go further than handwritten HTML and CSS. I also have a Gopher hole. The only advantage a browser such as Emacs' eww has over an Emacs Gopher client is that eww is included by default. It also leads to good practices such as avoiding superfluous images and styling. Another advantage is the relatively small set of people using Gopher, meaning people who use it are more likely to reach out or similar things, like the older Internet.


[flagged]


But it’s also frightfully easy with such stacks to end up with invisibly broken functionality, e.g. a form that you maybe even intended to work without JavaScript, but accidentally broke in some way; or a widget that is just blank in the HTML and gets filled afterwards (this is very common). As an example of that: in https://www.joshwcomeau.com/css/custom-css-reset/, the first code block is a loading spinner. Why it’s different from the second one which has the content unhighlighted in a pre tag, I have no idea. And then later on there’s the “code playground” widget, which works (so long as you don’t try editing it), including syntax highlighting this time. Why? I can’t be bothered speculating.

All I mean to convey is that the likes of Next.js aren’t a panacea: you do still need to be aware of what’s going on and sometimes adapt your code to work with Next.js. 80%, as you said.

And as an example of another sort of unnecessary brokenness, there’s a “Show more” later in that page that should have been serialised as a <details> element, but it’s done as a <button>. Here the lesson is: lean on what HTML offers, where possible.


Good points. I might even go a step further and say that not only should you be AWARE of how your page behaves after the magic (what breaks without JS, what doesn't), you should deliberately pick and CHOOSE what's OK to break and what is essential content, and design around it.

An example: At my last job (a museum), accessibility was a business need (and grantmaker requirement), and part of that is ensuring that our content was accessible by people using screen readers. At the same time, the website was owned by the marketing department, who had strong thoughts on design and "modern" UX. It was a constant tug-of-war between the two and the JS had to be carefully balanced.

For something as simple as a list of exhibitions or blog posts, it was essential that everybody and any browser be able to see a list with titles, a short blurb, and a thumbnail. That was the "must-have". Nobody should be excluded from reading our actual content.

Then, because we have posts stretching back years, we also wanted a good filtering system, so people could search for certain topics or keywords or whatever. That's the "nice to have" that we decided was OK to sacrifice for certain edge cases.

It is possible to write such a system using plain HTML (by sending HTTP GETs/POSTs and having the backend do the filtering). But such a system was much much slower for the user (multiple seconds instead of single-digit milliseconds, because the backend had to process the request, generate the HTML, and send the whole HTML back... instead of client-side JS that can just filter an already-loaded array in a few milliseconds, or at most request a JSON of filtered post summaries that's much much smaller than a full HTML page).
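(For the curious, the plain-HTML version is just a GET form that the backend handles; field names and the endpoint here are invented:

  <form method="get" action="/blog">
    <select name="topic">
      <option value="">All topics</option>
      <option value="exhibitions">Exhibitions</option>
      <option value="education">Education</option>
    </select>
    <button>Filter</button>
  </form>

...it works everywhere, it just costs a full round trip per filter change.)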

So we looked at our analytics and determined that almost nobody (< 1%) was using the filtering system at all. Mostly people were just going directly to individual blog posts (social shares or whatever) or looking at the top few most recent ones. It was such a rarely used feature to begin with that we decided that it was OK to sacrifice for the non-JS users.

I'm not saying that this is the ideal approach, just that it is AN approach -- dictated more often by business needs and resource limits, not developer ideologies. Some of our peer institutions (other museums) went opposite ways... some wrote in pure HTML and a sprinkling of bespoke vanilla JS, with very fast and lean pages that didn't have much clientside interactivity. Others went to the other end of the spectrum, creating richly animated interactive stories that are more like a educational game than a webpage. As a developer I have my own preferences on such things, but in the grand scheme of things, our ideologies rarely matter.

What practical frameworks like Next do is allow us to find an acceptable middle-ground between a lot of stakeholders, balancing UX, DX, DevOps, business requirements, resource costs, etc. Next is itself a frankenstein of technologies -- mirroring the modern JS and cloud service ecosystem -- that doesn't try to preach any particular ideology. It looks at the chaos out there, accepts it for what it is, and tries to harness the bits and pieces of it that makes sense to use in an average website. I think it does a pretty good job.

You can, of course, make similar evaluations and tradeoffs as an individual dev. But it's a LOT of work to think through and build and maintain. It was easy to make a JS site and no-JS site side-by-side back in the day (along with frames and no frames!), but that's not really a practical approach for the complexities of modern sites and stakeholders; it would increase development time by an order of magnitude or more. Progressive enhancement is great for simple sites that are really more "documents" than "apps", but even that starts to break down once you move from "documents" to "document store", as in you have a pile of documents originated from some content management system / production pipeline / editorial review process that multiple stakeholders work on together. At an organization of any real scale, you can't have your webmaster editing raw HTML every time somebody wants to update a blog post.

Web dev isn't a place for purists. The whole history of the web is "hack on top of hack". HTML is a barebones document markup language more aligned with Gopher than modern web apps, and JS itself was a me-too panicked reaction to Java (and later, ActiveX and Flash). That it grew to such popularity was an unfortunate coincidence of history, as the lowest common denominator that browser vendors could accept. V8's massive improvements to the JS runtime cemented its own fate, taking the web from "JS is useful for adding a little bit of interactivity" to "JS is now fast enough to BE the web". Now V8 is pretty much the de facto operating system of the web, for better or worse. As individual developers, this isn't a system we work in, it's a system we work around because it's just what history left us. Every page we build is a hack on top of a hack, a workaround for another workaround that somebody wrote into a spec or engine a decade ago for the needs of those days.

I hate it as much as anyone, but until we can get a new web ecosystem (crypto-web doesn't count), we're just kinda stuck with it...


> you should deliberately pick and CHOOSE what's OK to break

Absolutely. Unfortunately this is harder to do in a Next.js world, because it’s obscured by abstractions. Life was much easier when you were clearly writing regular HTML and enhancing it with specific JavaScript because the whole thing was deliberate. With JavaScript frontend frameworks plus server-side rendering, the default is to be unaware of all of this stuff and you have to go at least a little out of your way to do it properly.


> With JavaScript frontend frameworks plus server-side rendering, the default is to be unaware of all of this stuff and you have to go at least a little out of your way to do it properly.

Too true. In 20 years of web dev, my job has largely gone from "writing inscrutable code" to "reverse engineering someone else's inscrutable code".

Heh, that's part of the joy though. Now it's no longer "production is down again, fix it!" it's "oh, AWS is down, wanna grab a coffee while we wait?". Or on the FE, "oh, I found the bug and submitted an issue on their Github, hopefully they'll fix it in the next version".


Does Next SSR like this also submit traditional web forms and handle them server side? Or would that count as the 20% that's unsupported?


I would say forms are an adjacent technology. There is nothing inherent in React or JS that would either magically make or break forms.

A form is just a bunch of input fields and a submit handler.

If you write your fields as real HTML input fields (instead of, say, weird bespoke divs that happen to have key handlers, yuck), you can still add JS to them for autocomplete and validation and such. But they'll still work without JS.

The other part is the submit handler. If you capture the submit button onclick to do something (like submit it using AJAX) then that won't work. But it's easy to add an HTTP POST fallback, and honestly that should be a best practice with or without Next. It's been like that since the jQuery days and before.
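A sketch of that pattern (names and the endpoint invented): the form POSTs on its own without JS, and the script only takes over when it actually runs:

  <form id="contact" method="post" action="/api/contact">
    <input name="email" type="email" required>
    <button>Send</button>
  </form>

  <script>
    // Progressive enhancement: without JS the native POST still happens
    document.getElementById('contact').addEventListener('submit', async (e) => {
      e.preventDefault();
      await fetch(e.target.action, { method: 'POST', body: new FormData(e.target) });
    });
  </script>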

Beyond that, I can't think of anything specifically involved in SSR that would affect forms. Have an example?

Edit 1: If you're using a third-party module/framework/service to create a form, whether it will work without JS depends on its specific implementation. The MUI framework, for example, has higher-order form input abstractions (like the <Input> component here https://mui.com/base/react-input/#introduction) that depend on React and JS, but are really just wrappers around the HTML <input> element. It will still accept text without JS, just lose some of its secondary features. On the other hand, with something like Typeform... I couldn't even get the actual form to load without JS enabled: https://www.typeform.com/templates/t/demo-feedback-form-temp...

Edit 2: I thought of another way to interpret your question. In addition to the frontend parts of Next, which the above addresses, Next also has API endpoints and edge functions that can act as form handlers if you want them to. That only works if you're hosting on a live Node server (like Vercel), not if you're baking to static clientside HTML for CDN distribution. (The difference between `next start` and `next build`)

But that's kinda a tortured way to deal with form submissions. Usually the webforms would POST to someplace that's a preexisting business requirement (a Salesforce or Hubspot endpoint, for example, or even an email address), so you don't really have to use the Next middleware to handle submissions. You COULD do that in Next if you really wanted to, but I wouldn't myself... it would just be reinventing the wheel for no good reason.


That depends on how the dev building the site writes them. They can work but they usually don't.


> a lot better than having your site break altogether.

that's a humble baseline, isn't it?


It's better than nothing, though. Often the alternative isn't "let's not use a JS framework and just build a bare HTML site from scratch", but rather "let's just use the framework and forget about the few users who disable JS."

So while Next & SSR isn't a perfect solution, it is often the only practical thing to do when the team or company doesn't want to spend resources on a no-JS solution.


I make sure that my websites require JS to function even if not needed just to annoy the blockers. It's 1% of users demanding the world bend to fit their preferences, like a lot of annoying vocal minorities.



