Everyone Has JavaScript, Right? (kryogenix.org)
68 points by 0xedb on June 12, 2021 | 88 comments



These people march into every JavaScript discussion, carrying the same cross on their back since the first day Netscape supported AJAX. They lament what a travesty the internet has become thanks to this mongrel language which broke the purity of HTML.

What really gets me is this kind of Good Samaritan plea to developers to just “do the right thing” as if every line of JavaScript was paid in orphan tears and puppy blood.

It’s 2021 and at this point it really feels like they’re begging the question: “Sites that require JavaScript are not as good as sites that don’t because they can’t operate without JavaScript.” We already know that. It’s well understood what makes a site feel slow. Progressive enhancement is an ideal situation — it’s just not something that every organization prioritizes. And for some web apps, server-side rendering adds significant overhead to problems like caching, authentication, and multi-platform compatibility.

If you’re honestly invested in progressive enhancement, contribute to open source tools that make it easier to ship.


Since when is writing JavaScript easier than HTML? What amount of extra complexity and special tooling is required to make it easier to "ship" HTML?

I am by no means an "all JavaScript is bad" zealot, but I do browse with it disabled by default so I don't have to trust that every single third-party developer is good enough to prevent some malicious douche nozzle from injecting a rogue script that turns my web browser into a botnet, starts mining crypto for somebody, or exploits the newest silicon defect and uploads my in-memory keys to some cloud provider. And while I would agree it's unreasonable to expect complete functionality from your site with scripting disabled, I don't think it's unreasonable to expect your page to at least partially load. I get it: JavaScript developers for the most part don't give a shit about security*, but some of us do, and it would be nice if it wasn't a constant battle between the two of us.

*To you, the random JavaScript developer who actually does care: notice I didn't say all, but you have to admit your colleagues are often guilty.


Pure HTML+CSS websites are, of course, way easier to code than ones with heavy Javascript (Assuming we're talking about hand-coded websites, and absent other factors like the developer's personal familiarity.)

But there are also things you can't do without Javascript. For example, if you want a fade-in animation to play when the user scrolls to a certain spot on the page, you'll probably want to calculate the user's scroll position with Javascript. If you don't address the no-JS case specifically, the element will never fade in for users without Javascript!
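A sketch of handling that no-JS case (the class names and DOM wiring in the comments are assumptions, not from the article): keep the element visible by default, and only hide it for the fade once you know JavaScript is running. The scroll check itself reduces to a small pure function:

```javascript
// Pure helper: has the element's top edge entered the viewport?
// All values in CSS pixels.
function isScrolledIntoView(elementTop, scrollY, viewportHeight) {
  return elementTop < scrollY + viewportHeight;
}

// In the browser you would wire this up roughly like so, with CSS that
// hides .fade-target only under an html.js selector:
//
//   document.documentElement.classList.add('js');
//   window.addEventListener('scroll', () => {
//     for (const el of document.querySelectorAll('.fade-target')) {
//       if (isScrolledIntoView(el.offsetTop, window.scrollY, window.innerHeight)) {
//         el.classList.add('visible');
//       }
//     }
//   });
//
// Without JavaScript, the 'js' class is never added, so the element is
// simply visible from the start instead of never fading in.
```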


You should be able to do that with pure html/css as well by abusing the newish lazy load feature and probably some SVG! Yay for modern web


> had-us-in-the-first-half.gif

lol, you got me in the first half there; I was already mentally constructing my reply before I eventually got the sarcasm :D Well played!


There are loads of cases where building something in JavaScript is easier than HTML. Because a non JavaScript version of lots of applications means a dynamic server generating HTML. I would argue that building secure, user oriented data-backed applications out of JS clients with backend microservices in the JAMStack style is actually easier than doing it in classic serverside scripting LAMP stack or Ruby on Rails style.


> There are loads of cases where building something in JavaScript is easier than HTML.

Yeah, I'm aware of these cases. I'm not demanding your web3.0 real time interactive multiplayer pokemon clone work without JS. But it would be nice if the text and images on your root index page would load without JS, so that I could decide if I want to run your arbitrary code on my computer without feeling like I'm rolling the dice.

> Because a non JavaScript version of lots of applications means a dynamic server generating HTML.

You haven't heard of page pre-generation, have you? You could use JS to generate all the basic HTML you need, save that, and serve it somewhere from some basic HTTP server. Then the site would still do something without JS.
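A minimal sketch of what such a pre-generation step might look like (the page structure and data are invented for illustration):

```javascript
// Minimal pre-generation sketch: run this once at build time with Node,
// then serve the output from any static HTTP server. No client-side JS needed.
function renderPage(post) {
  return `<!doctype html>
<html>
<head><title>${post.title}</title></head>
<body><article><h1>${post.title}</h1><p>${post.body}</p></article></body>
</html>`;
}

const html = renderPage({
  title: 'Hello',
  body: 'This page renders fine with scripting disabled.',
});

// In a real build step you would write the result to disk, e.g.:
//   require('fs').writeFileSync('public/index.html', html);
```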

> I would argue that building secure, user oriented data-backed applications out of JS clients with backend microservices in the JAMStack style is actually easier than doing it in classic serverside scripting LAMP stack or Ruby on Rails style.

I would argue the exact opposite, but this absolutely comes down to familiarity. I've written more python and html than I have JS. But what's easier still is writing a basic blog with the exclusive intent to deliver information, rather than increase growth, retention, conversion, or whatever metrics sales/growth is asking for today.


A basic blog can obviously be built using a simple DB-backed server side framework. The original blogs were basically designed almost entirely around the capabilities that were easy to accomplish with that architecture - you can practically read off the underlying relational table structure of Dave Winer’s blog from the RSS spec. As you render each page serverside you just SELECT in the blog content, the index links, the tag cloud, etc. But it has to be said (speaking as someone who has implemented this a few times) comments were always the tricky bit.

You’re right that SSR/pregeneration approaches are a reasonable modern approach to building a simple blog, but still comments are an annoying obstacle to a simple pregenerated page model.

And I’ve got to say: it for sure gets a lot more complex when what you’re building isn’t a blog but a blogging platform.

I’m absolutely not arguing that such a system shouldn’t serve up relatively simple prerendered HTML for article pages by the way - I mean, basic SEO kind of pushes you to do that - but that the rest of the stuff on that blog - the tag clouds and the comments and so on - are actually easier to add through clientside JavaScript integration of simple backend services, than through adding complexity to your serverside HTML generation model.


I think what people lament is not JavaScript, so much as using JavaScript unnecessarily. Recreating (and usually breaking) navigation history, recreating hyperlinks, losing scroll position by interfering with it or with infinite lists, recreating forms, etc.

JavaScript is overused and often at the expense of usability.


This. There are so many pages that could display perfectly without JS, but they don't because someone messed it up.

Think about it: the very existence of the Reader Mode in popular browsers shows that users no longer can read information on web pages comfortably in the browser and need another specialized layer to remove all the cruft.


The author of this piece has been working to help people implement progressive enhancement for nearly two decades now.


I think we can agree that the author is genuinely interested in the problem. Personally, it’s exhausting to hear what I believe are the same people ask web developers to build with competing constraints at the same time. A good website must be…

- Renderable to both modern and older browsers

- Designed for mobile and desktop screens

- Accessible to those with disabilities

- Supporting some level of offline web app use, ideally avoiding a platform specific App Store

There’s all of that and more, but it’s not impossible. In fact, requirements like accessibility are often mandated by law. But if I was building a new product, I’d honestly prefer a smaller set of JS-friendly customers that I can serve a great experience to, and go from there. Progressive enhancement usually runs counter to this. It’s very time-consuming and adds additional platform complexity to your existing technical product problems.


> Renderable to both modern and older browsers

If I disable JS and/or cookies, I should be able to read the content of the web page. Just send over the HTML and CSS and let the web browser do its thing.

If I get a blank page when either/both are disabled, but I do a 'view source' and get pages and pages of stuff, IMHO you're doing something wrong. It doesn't have to have all the functionality, just please have some text be thrown onto my screen.


What's funny to me is how many people forget that JavaScript is a web standard. By all means disable JS - it's great that the web offers this level of control. But if something goes wrong, just remember it's the client breaking the standard, not the website. Direct your blame appropriately.


The WHATWG HTML standard explicitly allows for disabling scripting [1]. You are not directing your blame appropriately.

[1] https://html.spec.whatwg.org/multipage/webappapis.html#enabl...


That does add some murkiness, but I don't take the existence of a switch to mean that websites need to accommodate both types of users. It's a technical requirement for features like <noscript> to work, so it needs to be defined in the actual standard.


Sorry, we only support users who download and autostart our spyware, I mean user experience enhancement plugin.

It's not required to create a site at all, but agreed, options are good. And "because I could" is a perfectly fine reason. But the argument is "should I require JS", not "could I".


Right - the standard provides tools you can use as a site to discover that scripting is unavailable and gracefully degrade - if you so choose.

It also provides marquee and blink elements but you’re not obliged to use them.


Encrypted Media Extensions is also a web standard, yet I won't let that stop me from discouraging its use.

Flash used to be a de-facto web standard. Yet sites building their navigation entirely in flash were just as shit as sites requiring JS to display some text and images.


Yes, absolutely. This isn’t news to businesses that have built websites that rely on JavaScript. They know not everyone has JavaScript.

When movies started coming out on Blu-ray and not VHS, you could put a graphic like this out to rebut the assumption ‘everyone has Blu-ray’ - and the studios would nod and agree and say ‘yes. We know. We want everyone to buy Blu-ray players and we don’t care about serving people who only have VHS machines’

Websites know some people don’t have JS. And they want those people to use browsers with full JS support, and if they choose not to, who cares?

I mean, there’s a whole stack of people who were excluded before this JavaScript flowchart even got started: do they have a computing device? Is it connected to the internet? Is it charged or plugged in to power? Have they heard of your site? Do they speak a language your site is offered in?


I think it depends on the end service.

For instance, some critical services would do best to be available without those technical constraints.

So for a chat system for video games? Who cares. For my online banking when they no longer staff tellers? It should probably be more of a consideration, or at least offer a parallel, stripped-down portal. Not everyone who needs to access that kind of service has the means for a fast enough computer to run some of the websites out there.


But all services can choose what they want to support. If I want to play Call of Duty I need to put the disc in an X-Box, not a DVD player. If I want to use google maps, I need to load it in a browser with JS support, not an e-reader.


Funny enough, I think you're saying the same thing I am but looking at it from a different angle.

I'm not saying JS-forward web apps shouldn't exist. I'm saying that some services should not only offer JS-forward web apps if their obligatory clientele ranges across the entire spectrum in terms of financial or technological resources and physical ability. Screen readers do not fare well on poorly structured and JS-forward web sites, for example—but the blind still need to access their online banking when their local bank branch removed their tellers and replaced them with a terminal that accesses the web portal alone...


It's not that simple - modern screenreaders cope pretty well with dynamic UI elements (so long as they are implemented correctly); it's actually pretty unfair to force someone who has a screenreader to keep having to start again at the top of a page and navigate back to where they were every time they interact with the page, because you wanted to keep things simple and clean and just serve up plain markup from a server. Sighted users can keep their place on the page across a reload, but screenreader users might be back to hearing the title and top level nav options again.

Remember that what blind users use to interact with plain web navigation is called 'assistive technology' - it adds more interaction options onto plain web pages. You couldn't build such a thing with just HTML.

In general, JavaScript offers the opportunity for web-delivered experiences to be improved - for all their users.

Browsing with Javascript disabled means you are not going to get to enjoy those benefits. And I don't think that businesses should all feel obliged to try to provide a completely equal experience for someone who chooses to browse with such a restriction.


No. I didn’t take this site as a religious anti-Turing-tarpit sermon.

Rather, it is saying that not every load of your page will run the JS. 99% might, but 1% won't, to make up some stats.

So you need to decide how your app behaves when there is no JS.

I work for a very-rich-desktop-experience web app company where there is nothing you can do! The best thing is to have a nicely styled “sorry” message as a default.

If you are operating a mail client, maybe you can have a separate low-fi mode. Or choose a tech that allows server-side rendering, which is running that JS on the server and then loading up that page.

There are trade-offs for every site, from no-JS-in-the-first-place static marketing sites to web-app versions of native apps with all the whistles.

It comes back to technical and architectural and UX design.

The post here is a reminder. Some of your client loads won’t have JS running, of that you can be sure.


These people comment after reading only the title. They lament in off-topic comments because they never actually read the article.


I'm not sure the OP can appreciate your comment though.


As somebody who browses with JS disabled by default: Would you let me give you a 2KiB executable of native code? Probably not. What makes you so much more confident that if I give you 2KiB of JS, that it's not going to exploit you?

We are far past the "do the right thing" plea. At this point, many major sites are simply not on my browsing menu, and I write tools to extract any data that I care about. It is now a systemic issue: How much do you trust random publishers to not attempt to compromise your Web browser?


An interesting blog post from the UK Government digital team on progressive enhancement, suggesting from their findings that 0.9% of visits to a national government site are from people without functioning JavaScript, who haven't intentionally turned it off.

https://technology.blog.gov.uk/2016/09/19/why-we-use-progres...

And the original research is available at

https://gds.blog.gov.uk/2013/10/21/how-many-people-are-missi...


Their research was conducted in 2013, so I do wonder how applicable it is today. For example, as I recall Blackberries didn't support Javascript in most cases, and there were probably still a lot of those in active use as of 2013. Not so much in 2021.


I encountered the "your ISP injects broken Javascript into all HTTP pages" problem around 2016. Someone thought I had done a poor job on a site I built for them. It turned out the Javascript on the page wasn't from the site I built, it was injected by the ISP, one of the UK's largest mobile data providers.

We fixed the problem by switching to HTTPS.


I used lynx on a computer of mine a while ago when an upgrade broke the login. I could switch to a different terminal to do that, find the issue, then run the commands to fix it.

I've also used the Raspberry Pi as a computer in the past and used Dillo (or similar browser) as it was lightweight.


I don’t doubt that there are still plenty of users on browsers without Javascript (many of them here on HN), but I wonder if they would still constitute 0.9% of visits.


Probably <0.1%. If you sorted your site traffic by visits per user agent, it probably would be lost in the long tail of noise.


I mean if you consider your uncommon users noise, I kinda get why you wouldn't want to make any effort to support them.


Nice list of rare edge cases.

Worrying about somebody using "Chrome data saver mode" before your startup gets 100k hits per month means to me that your focus is completely off.

Instead, perhaps improve your core product and talk to actual users today?


You know, I once saw a video where Linus Torvalds describes good code as code that doesn't have to handle edge cases differently from non-edge cases. HTML will always render, so why isn't that always the minimum default? Why is starting with HTML and adding JavaScript harder?


Feel free to try to tick all the UX best practices from Nielsen Norman with pure HTML, and be done by the end of this week since we would like to ship.

See where the problem is?


> See where the problem is?

No, I don't. In part because I don't tick boxes, I just (try to) build good shit. Also because I don't expect every UX "best practice" from [random internet authority] to hold on a site with JS disabled. I just expect the text on the site to render somewhere readable.

Usually, the argument from the "js is bad" crowd, isn't that the site should be as functional with JS disabled... only... that the basic content... should still render...


You are 0.001% of my visitors and any rational business will refuse to invest resources to serve your unique needs.

If my current frontend framework of choice happens to not render any HTML without JS, I am sorry, my site will be inaccessible to you.

Because that frontend framework allows me to iterate much faster and improve my product for the vast majority of users.


> You are 0.001% of my visitors

My javascript user metric collecting service says all my users have JS...

Imagine that :P

> any rational business will refuse to invest resources to serve your unique needs.

I'm not convinced that's rational, but all the same: any rational engineer would rather do the job correctly than do what's easy. Which is what the argument comes down to. One way, the minimum always works; the other way is easier (for you).

> If my current frontend framework of choice happens to not render any HTML without JS, I am sorry, my site will be inaccessible to you.

Don't be sorry in this case, if this is the tradeoff you've decided to make. That's totally fine. It's your project, and your code. You absolutely should make the decisions you think are best. I'm just some random idiot on the internet who thinks your decisions are bad, and your framework is a dumpster fire waiting to implode. I like to write code that works, and I think code that works in every case is objectively better than code that works in most cases, and is easier to write. But I suspect the code we write and projects we care about are very different.

The difference seems to be that I care more about the code than I do about the cash. You frame all decisions from the context of a business trying to make money. You're right when you say:

> Because that frontend framework allows me to iterate much faster and improve my product for the vast majority of users.

That is probably the best way to make the most money with the smallest expense. But I have a different set of standards for the work I do. And just to be clear, please note I didn't say my standards for my work are better than yours, only that they're different.


> I like to write code that works [in every case]

But your code does not work in every case:

Imagine a user that would like to access your website using an even older setup than simply no JS; let's say he's using lynx in a 50-by-20-character ssh session on an old Android phone with a broken keyboard.

Your website will look like shit, it will be absolutely unusable and the user will have a mental breakdown.

If you counter that your website is only one line of text that says "Hello world!", and that would work on my imaginary lynx setup, let me counter as well:

I am now a user that has no computer at all. I would like to access your content by calling. Where is the phone number I can call to talk to a human in my language at 3:30 in the morning?

Remember, not everyone is using the latest technology! A company should be accessible even if the user doesn't have WASM/React/JS/CSS/HTML/HTTP/TCP/a computer/a Chomsky universal-grammar based spoken language/...

I think we all have edge cases we simply cannot serve. I agree that there might be different thresholds, though. :-)


I’ve been in web tech for almost 20 years. Every time, in every company I’ve worked for, the discussion of “what about people with no JavaScript” eventually gets brought up… every engineer in the room says “who cares about those people” and we move on.

It’s never been worth worrying about non JavaScript clients (except search engines)


> I’ve been in web tech for almost 20 years.

That's probably the difference in experience then. I've been a software engineer for almost as long, but I'd never consider describing my work as just "web tech", and the engineers are all willing to make a reasonable minimum effort to at least render "something" without JS. The difference is probably between web developers, and software engineers generally.


I would be fascinated to know if “optimize for Chrome data saver mode” has ever been included on a ticket or epic anywhere.


Not exactly the same, but regarding a slightly similar setting in browsers. I had problems getting some 3rd party widget working in a webpage.

I had enabled 'Do Not Track' in the browser, which broke the widget.

I'm sure the 3rd party provider regrets listening to the privacy nerd who insisted on supporting this header.


Yeah, it seems like the page answers its own question.

"Everyone has JavaScript, right?"

After reading the page...

...basically, yeah.


This is the https://en.wikipedia.org/wiki/Conjunction_fallacy in disguise. What is more likely; that JS is working correctly for everybody, or that at least one of the article's listed conditions applies and JS is not working correctly for everybody?
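A back-of-the-envelope sketch of that point (the per-condition probability and count are invented for illustration): even when each listed condition is individually rare, the chance that at least one applies to a given page load adds up quickly.

```javascript
// Probability that at least one of n independent failure conditions
// applies, where each has probability p. The inputs below are
// illustrative numbers, not real measurements.
function probAnyFailure(p, n) {
  return 1 - Math.pow(1 - p, n);
}

// e.g. ten independent 0.1% conditions already affect roughly 1% of loads:
const anyFailure = probAnyFailure(0.001, 10);
```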


As the world has moved to HTTPS everywhere, ISPs injecting JS is a non-issue. They can't as they can't see what's in there. They can't even block JS.

It's browser plugins right now -- the main snoopers.


Depending on the browser plugin, they can be there to prevent snooping. Going to booking.com (chosen by percentile dice from list of top 100 websites), there were several snooping javascript downloads blocked by ublock origin, from google analytics, google ad services, and lms-analytics.


TFA itself concedes that we’re (generously) talking about 1% of users.

Until you’re in 6-digit MRR territory, this shouldn’t even cross your mind when building a Web app. There are way more important things you should be doing to your product/business.


Hm, this makes me think about my personal website, which uses a fancy Javascript fade-in because I like fancy animations (and the site is partly my playground to do what I want). However, I also have a noscript stylesheet to ensure the website is functional without Javascript.

Does anyone know what that means for e.g. flaky mobile browsers?


<noscript> is only activated if JS is actively disabled. None of these failure modes will trigger it. You can use onerror="foo" on <script> tags though

EDIT: Actually the data saver thing is called "noscript" so you'd hope it'd activate the tags. Network failures though will trigger onerror and leave the rest up to you
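A sketch combining both hooks (the fallback element id and script path are made up):

```html
<!-- <noscript> covers only the "scripting actively disabled" case -->
<noscript><p>This page works best with JavaScript enabled.</p></noscript>

<!-- onerror fires when the script fails to load: network failure, blocked CDN, etc. -->
<script src="/app.js"
        onerror="document.getElementById('js-failed').hidden = false"></script>
<p id="js-failed" hidden>Part of this page failed to load. Try reloading.</p>
```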


I'm thinking a lot about this while planning a frontend architecture for my new project.

Currently, I'm checking out Cloudflare Workers.

They're compatible with service workers, so they might be a viable fallback solution for people without JS.


I've been hearing that the Jamstack type approach, along with a supporting CDN like Netlify can achieve this pretty well. No experience actually putting into practice yet though


Not all HTTP consumers are interactive web browsers.

Web scraping may not be your value proposition (or you may not think it is), but it could very well be your users'.

And yes, I'm that small but vocal minority that makes use of console browsers, wget and curl, and heavily disables JS in my graphical browsers for numerous reasons. Gratuitous breakage is utterly unnecessary, and that's the point. Not that you cannot use JS, but that functionality which is independent of JS should be supported, when JS is disabled.

Everything else misses the damned point.


> It’s only natural that as front-end developers we want to flex our technical skills... There’s an issue with lending more of your focus to the possibilities promised by new technology though: your focus moves away from your users.

> Meeting our many users’ needs is number one on our list of design principles.


The people that disable Javascript are missing a whole world of interactive applications. Not every website wants to track you down and share that information with Google.


The people who disable JS:

- May still want to see the noninteractive element of that application. We call those "documents".

- May selectively choose to enable JS in specific cases where utility outweighs other factors. But JS dependence impedes that case.


>The people that disable Javascript are missing a whole world of interactive applications.

What?

Interactive applications came way before the Web and JavaScript.

Interactive applications currently exist outside of the Web and these same applications may not have been built using JavaScript.


"Have they switched off JavaScript? - People still do." Is that really a valid case in 2021?


I browse with uMatrix and JS disabled. If your site makes me enable more than 3 different 3rd-party scripts, I usually just fuck right off. I actively work to avoid it, and will remember it and actively trash your site and mock your users to anyone, given the chance.

Yes I'm that much of an asshole.

Does your site mostly work without JS until I want to do/use 'other thing'? Then I'll normally just whitelist the whole site, because I know that you've at least considered what needs to be included and what doesn't.


What about sites with only first-party scripts?


If your site only loads scripts locally, you and I are good. I've even globally whitelisted a few like jQuery and the rest.

But more often than not I have to scroll past more than a few untrustworthy data-mining domains just to find the CDN that's distributing your magic JavaScript. Those are the ones that piss me off.


I'm known to just edit the page directly in the console if it annoys me too much. I'm a _heavy_ user of privacy-preserving extensions.


Yes - see the popularity of uBlock Origin, NoScript, uMatrix etc.


HN may be giving a skewed idea of how truly popular things like NoScript are.


Neither uBlock nor uMatrix blanket-disables JavaScript.


But being able to disable it is the whole point of installing it, and it does disable third-party scripts by default.


I know they have the capability to block scripts, but this is the first time I'm hearing that blocking all javascript on all websites is the primary purpose of both uBlock and uMatrix.

Their strength is in selective and goal-oriented blocking, in a way that doesn't break nearly every website by default.


My uMatrix config does :P


On nearly every thread on Hacker News there are a few posts from people who have JavaScript turned off who will say “this site is blank”


Despite all of them knowing exactly why it's blank and what they'd need to do to use the site ;)


But it's a choice, like not having a smartphone, the rest of us still want and can have nice things. In the cases where it's not a choice it's something else, some may need to make sure it still works. I'm not going to do it, I have no reason to.


On many sites javascript is about ad-tracking, not "nice things".

For example, cnn.com shows no content with javascript disabled. By itself html is perfectly capable of displaying text and graphics, that's what it is designed for. Taking a quick look it appears that about half of the scripts loaded on their front page are for ad-tracking. A gizmodo page I'm looking at now wants to load 37 scripts for some reason. ublock is showing me 14 ad-tracking items.

I'd rather avoid such sites. So I go somewhere else if they don't work with javascript disabled. I can make per-site exceptions, but I'd rather not.

(In the above examples, I use lite.cnn.com to consume news articles MUCH more quickly, and gizmodo with javascript disabled still showed me the text of the article I was interested in, so I'm happy.)


I agree with you, this shouldn’t even be an issue in 2021.


Is it possible, without Javascript, to have:

- drag drop reorder and move?

- accessibility?

- draw on top of webpage?

- popup menus?

- show/hide sidebar?

Any links for examples?


>accessibility?

HTML is pretty accessible by itself, simply follow the standard and WAI-ARIA.

>popup menus?

https://www.w3schools.com/howto/howto_css_dropdown.asp

>show/hide sidebar?

You can use the checkbox way to handle that: https://codepen.io/Sfate/pen/nLBGr

The rest isn't really possible without JS as far as I know.


That popup menu is not accessible though: no keyboard navigation, and if you really wanted to follow all the accessibility guidelines you'd still need javascript to toggle attributes and stuff. https://www.w3.org/TR/wai-aria-practices-1.1/examples/menu-b...


If a native html feature isn't accessible, that sounds like something the browser makers need to correct, no?

(Although, you could argue the end-result for users who need accessibility features is the same, regardless of who is at fault.)


It also isn't a "pure" HTML solution as all the magic is happening on CSS. So, convoluted CSS or two lines of JS?


As someone who browses with js off by default (uMatrix rule) and has implemented a fancy drop-down using css rather than js[1], my current opinion is that it's better to display all menus expanded by default and then hide them if/when your js loads. It's less effort to write and, more importantly, browsing with js disabled means I want interactivity off. If I want the "full experience as intended", I'll enable js. I frequently do. But I browse without it by default because frequently that experience is a downgrade from just the raw (well, styled) content. Leaving the fancy stuff to js suits everyone best.

[1] please note the content on the page is outdated, this is just a prototype -- the drop-down below "PayPal": https://sdproto.gitlab.io/donate/


The progressive enhancement way to do show/hide sidebar used to be to have the sidebar content displayed in full at the bottom of the page, then use an anchor link that jumps the user down to that content.

Then the JavaScript that loads can hide that area and cause it to display in place when the show link is clicked.

Or... these days you can get a lot done with the HTML summary/details elements - including implementing popup menus.
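For instance, a JS-free disclosure menu sketched with summary/details (the link targets are invented):

```html
<details>
  <summary>Menu</summary>
  <ul>
    <li><a href="/archive">Archive</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</details>
```

The open/close toggling is handled natively by the browser, keyboard included, with no script at all.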


Popup menus, sidebars, drawing on top of a pages can all be done to a certain extent with some CSS

Accessibility is possible with standard semantic HTML and aria tags (again to a certain extent)

D&D is probably not workable though


1. Probably not in general? Maybe with a vertical <input type="range"> for each item and a lot of server-generated CSS hackery, for small lists?

2. Use ARIA

3. Use position: absolute?

4. Create it with position: absolute/relative and use CSS :hover selector to show it (or see #5 for explicit open/close)

5. Use <details> or a checkbox along with the appropriate CSS
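The checkbox variant of #5 looks roughly like this (ids and classes are illustrative): a hidden checkbox holds the open/closed state, and CSS reveals the panel when it is checked.

```html
<!-- Clicking the <label> toggles the checkbox even though the
     checkbox itself is hidden; :checked then drives the CSS. -->
<input type="checkbox" id="menu-state" hidden>
<label for="menu-state">Menu</label>
<ul class="menu">
  <li><a href="/">Home</a></li>
  <li><a href="/about">About</a></li>
</ul>
<style>
  .menu { display: none; }
  #menu-state:checked ~ .menu { display: block; }
</style>
```

Note the checkbox hack is not keyboard-accessible out of the box, which is one reason <details> is usually the better default.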


I don't see NUMBERS attached to the claims.


JavaScript kind of creeps up on you.

I don't work on any web pages that are particularly fancy or sophisticated. The main page I work on is the ordering page I maintain (but did not write) for a small company. Their product is pretty simple--a subscription service with some downloadable software to use the service. The ordering page has a section where you can select your subscription length (monthly, quarterly, yearly) with some radio buttons and a checkbox for buying a copy of the software on CD-ROM, a section for entering your email address and billing address, a section for entering your credit card information, and a button to submit your order.

They really want to keep this as one page.

Originally I'm pretty sure they had no JavaScript on the page. That was back when we didn't have to collect sales tax except from customers in our own state, and before the state's tax laws based online sales tax on buyer location rather than seller location. The checkout page simply showed your total without tax, and had a note that said if you were in our state we would add tax of X%, where X was the sales tax rate at our office.

Later, when the state switched to buyer-based taxing, I believe JavaScript was added. That message about taxes started off just saying tax would be added to in-state orders without stating the rate. When you filled in the zip code, the JavaScript would see if it was an in-state zip code and update the message to mention the rate for that zip (which it got from a static table in the JS). It might have even been fancy enough to also show you the total.
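A static zip-to-rate table like the one described might look something like this (the zip prefixes and rates here are invented for illustration, not the real table):

```javascript
// Hypothetical static table mapping in-state zip prefixes to tax rates.
const IN_STATE_RATES = {
  "981": 0.101, // e.g. one metro area
  "992": 0.089, // e.g. another
};

// Returns the sales-tax rate for a zip, or null for out-of-state zips,
// so the page can show "no tax added" vs. the specific rate.
function taxRateForZip(zip) {
  const rate = IN_STATE_RATES[String(zip).slice(0, 3)];
  return rate === undefined ? null : rate;
}
```

The nice progressive-enhancement property is that this is purely informational: the server still computes the authoritative tax, so the page works with the script absent.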

But if you did not have JavaScript it still all worked.

Somewhere in there the JavaScript started doing checks for things like missing form entries and blocking submission until the user fixed them. Still worked without JavaScript.

When we started selling in Europe and the UK and had to collect VAT, the JavaScript got a little fancier. Now it knew the VAT rates by country so it could show the VAT before you ordered.

The PHP script that served the page would decide what currency you probably used based on your IP. I modified that to include a dynamically generated JavaScript table on the page that had the prices of all the products on the page in USD, EUR, and GBP, added a currency select dropdown on the page, and added JavaScript to switch all the prices and totals on the page if the user changed the currency.

But it still worked fine without JavaScript. You might see the prices listed in EUR, say, and switch the dropdown to GBP, and your order would then be done in GBP using the GBP pricing. You just wouldn't see the actual GBP amounts until you got the receipt page.
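The server-generated price table and the currency switch can be sketched as below (product names and prices are invented; in the real page the table would be emitted by the PHP script):

```javascript
// Hypothetical price table, as the server might inline it into the page.
const PRICES = {
  monthly: { USD:  9.99, EUR:  9.49, GBP:  8.49 },
  yearly:  { USD: 99.00, EUR: 94.00, GBP: 84.00 },
  cdrom:   { USD: 19.95, EUR: 18.95, GBP: 16.95 },
};

// Returns the display price for a product in the selected currency.
function priceFor(product, currency) {
  return PRICES[product][currency];
}

// In the page, a change handler on the dropdown would rewrite every
// displayed price, e.g.:
//   currencySelect.addEventListener('change', updateAllPrices);
```

Because the dropdown is a plain form field, a no-JS user's selection still reaches the server, which prices the order correctly; the script only improves what's shown before submit.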

What finally stopped it working without JavaScript was a change Visa and MasterCard (and I think some others) made to what merchants have to do when accepting credit cards and when storing cards for recurring or on-file billing [1].

You must show the total that will be charged and get approval from the user before submitting the card. You must tell them how often they will be re-billed and for how much. You must tell them what card will be billed (and re-billed). You must get positive confirmation that they have seen this information.

For that, we added a section at the bottom of the page that spells all that out, and has a checkbox they have to check to confirm they saw it. The ordering button does not work until that checkbox is checked.

As the user changes what products are selected, JavaScript is updating the total in the order summary, and updating that disclosure section at the bottom of the page. Same as they change currency. If they changed address, the JavaScript calls a tax API on the server to find the tax, and updates the page. Same if they change the credit card information. The JavaScript unchecks the acknowledgement checkbox if they make any of these kinds of changes after they acknowledged before.
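The "any change invalidates the acknowledgement" rule can be modeled as a small pure state transition, something like this (state shape and names are invented for illustration):

```javascript
// Any change to products, currency, address, or card details means the
// previously acknowledged disclosure no longer matches the order, so
// the confirmation is reset and the order button is disabled again.
function applyOrderChange(state, change) {
  return { ...state, ...change, acknowledged: false };
}

function canSubmit(state) {
  return state.acknowledged === true;
}
```

Keeping this as a single transition makes it hard to forget a case: every edit path funnels through one function that clears the checkbox.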

Making that work without JavaScript would, I think, require going to a multi-page order flow, which management really does not want to do. I've done some measurements, and everyone who has tried to order over the last two years (both real customers and bots up to who knows what) has had JavaScript enabled, so it is pretty hard to argue that adding a separate multi-page order flow to handle non-JS people is worth the development time.

And so now we've got a page that requires JavaScript. No one set out to require JavaScript, but here we are.

[1] Well...they said they were making this change. They originally announced it, then when it was close to going into effect many payment processors said they weren't ready and Visa pushed it back a year. The payment processor we use got it supported by that second deadline, and we got our site and backend changed, but others were still not ready and Visa delayed it again. I'm not actually sure if it ever went into effect or is still being pushed back.


When I see on an empty white page:

> You need to enable JavaScript to run this app.

I know I have no interest in your project as you have no interest in communicating with me, only funnelling me into your recently launched startup.



