I recently started to fork WebKit to make it a WebView with a reduced attack surface. [2]
The most interesting part was the Quirks.cpp file [1], which contains literally randomized CSS class names inside the web browser code because a major website was so shitty and spec-violating in its implementation.
I mean, fixing a website in the browser's source code... this shit got out of hand real quick, yo.
The problem with all those drafts is that Google keeps doing their own thing, and others are forced to try to catch up or implement the same bugs/quirks that Chromium does. Everything is rushed so QUICly that even Microsoft gave up at some point. And at some point in the past Google realized that they can own the web if they own the browser. And that's what they effectively do now, because the competition isn't really a competition at all anymore.
I thought you were kidding about Quirks.cpp, but it is indeed true. I can imagine a few nightmarish scenarios where a quirk that was originally a fix becomes a bug when a company decides to alter its website. Worse still, the new code will probably work perfectly in a test environment, and then fail when you deploy to your actual domain. Some developer somewhere will be tearing their hair out trying to understand what the heck is going on...
It gets worse if you search for "microsoft.com" [1] or "bbc.co.uk" in the codebase. They literally grant cross-domain storage access because Microsoft's login workflow is broken with regard to CORS headers.
I mean, imagine a developer trying to fix their code when it behaves completely differently on iOS and they can't have a single clue why that is the case.
Additionally, WebKit is released very late to the public (sometimes even after fixes have been rolled out to iOS)... and before that, nobody on the outside can even know what's going on.
From a maintainability point of view, things like this should be, at worst, a Web Extension, but definitely not live inside a browser's source codebase.
Youtube is even worse once you dig through the Apple internal plugin replacement, which effectively implements a native C++ decoder for youtube streams on iOS because otherwise you could not watch videos there. This was done before as a WebCore plugin but has been moved around into the PluginProcess source-code wise over the last two months (and currently blocks me from merging in changes, so I have to backport fixes after I removed all legacy plugin APIs).
Both ATI/AMD and nVidia use all kinds of tricks to outcompete each other, since gamers only look at the fps count. That means driver optimization for specific games, even dropping fidelity a bit when that means more fps. Also, nVidia has teams that take over engine code to 'help' software devs make the most of their hardware, which gives them an opportunity to push specific approaches that don't work as well on their competitors' cards.
"Nvidia Gimpworks" - it isn't said in the video, but: while these techniques are slow on nVidia, they are even slower on AMD. Especially gratuitous tessellation.
https://youtu.be/AUO3LEjWsw0
You would think that they would at least have made it a configuration file, instead of literally hard-coding rules for specific website domains directly in the browser rendering engine's C++ code?
But why would they make it a config file, when they can just add it to the code?
Config files are like code, but even more shit. You never get the full set of functionality that you get with code, the compiler doesn't check your work, code navigation can't help you, and there's no debugger.
It sounds kind of funny to say “this should be in a config, not in code, but here’s a config language that lets you code your config”.
Regardless, I don't think Dhall was around at the time Quirks.cpp was created.
Funny story, my first exposure to ruby was using it to write ant (or was it maven) config files (EJB nightmares) because the XML config sucked pretty bad (around Rails 1.0 era). It was far more concise and easy to work with, being an actual language and English readable, but I was never sure it was a good idea in a large team environment. If Java hadn’t been so cumbersome I would have used it for config as well instead of the hot garbage XML config was (is still?). I’ve seen a bajillion attempts at config languages since then. Most not great.
A lot of times 2 variables and a `map` function is all I wanted and the config would have been pleasant. Getting type errors in my editor and being able to extend record defaults was the cherry on top.
That is odd. Yet a strict check against “hulu.com” is also included, and YouTube is checked for “.youtube.com” and “youtube.com”. Perhaps this fix was to prevent malicious behavior, so they included anyone who might be phishing or abusing the API in some way? I think someone would have to dig into the quirk itself to find out more.
Note that this is very likely due to ever-changing CDN domain names. If they spawn servers on demand, they're probably as messed up as googlevideo.com (which also uses hashed subdomain names).
Microsoft took a somewhat similar approach with their "Super Duper Secure Mode" in Edge. It disables the JIT in the V8 JavaScript engine to reduce the attack surface of the browser:
    // FIXME: Remove after the site is fixed, <rdar://problem/75792913>
    bool Quirks::shouldHideSearchFieldResultsButton() const
    {
        if (topPrivatelyControlledDomain(m_document->topDocument().url().host().toString()).startsWith("google."))
            return true;
        return false;
    }
Finally. I am so happy someone else noticed how much of a garbage fire Twitter is when you limit (not even block) JavaScript.
I use uMatrix and a hardened Firefox profile so I notice pretty quickly if websites are poorly made. Spoiler: the vast majority are, and the big players like Twitter, Facebook, Google, etc. are absolutely the worst offenders.
But Twitter is so frustrating because of exactly what the article's author says: they re-invent everything, including the text in the Tweet box. As soon as you start typing a tweet it gets converted into some horrific swamp of HTML elements which starts to break really fast if you limit JS as I do.
At some point the concept of "graceful degradation", which used to be taught as an essential part of high-quality web development, was totally abandoned.
I feel like I'm transitioning into "old man yells at cloud" more and more every day, but the web's going down the shitter real fast.
Speaking as a web developer, blocker of JavaScript by default, uMatrix user, and extensive user script writer: limiting is vastly harder to deal with than blocking; your “even” is unjust. “No JavaScript” is easy to detect and cope with (even if it’s just to respond “nah, dude; JavaScript me, bro”), but coping with blocking some JavaScript requires you to consider what could be blocked and what to do about it in each case.
If you’re talking third-party scripts, then sure, developers should handle failure to load scripts/XHR/fetch, simply for normal people’s sake; for analytics especially there are well-trodden paths, though certainly occasionally you encounter sites that don’t tread them. (Note that when I say “third party” I mean the social concept of third party, not “different origin”, because it’s common to load first-party scripts from different origins—CDNs, &c.)
If you’re blocking things at a finer grain than first-party/third-party divide, then you should expect things will often blow up. It’s not particularly reasonable for them to detect and work with your meddling.
A few years ago I had to use Dropbox from time to time. It was always fun whenever it dropped my session (never for any obvious reason) and forced me to log in again, as I had to tweak my whitelists almost every time as they’d have shifted their CAPTCHA thing around or added a new CAPTCHA service (they had three or four going at once!) or something. It would take literally twenty seconds after pressing the submit button before it would decide to actually let me in, even with no intercepted requests. Not sure what was going on with that. And if anything was blocked, then it’d probably just hang indefinitely, often even without any exception ever being thrown in the console! Definitely not a good experience, and they should detect that a necessary script failed to load and notify you, but eh, I reckon I brought it on myself by using fine-grained blocking.
I'm sorry, but the vast majority of websites handle umatrix and ublock plus assorted other privacy tools, just fucking fine, often when I haven't enabled any javascript beyond the same domain stuff.
Then there are the sites that are kinda/mostly broken until I enable a couple of obvious exceptions I should probably have universally enabled. That's fair.
Then there are the sites that take a minute or two of messing around to get to work... I have to whitelist 10+ different domains for them. That's irritating.
Then there are the sites that I can't seem to get working properly at all, and eventually give up and just open in a private mode tab in Safari, because I trust that combo the most.
Parent commenter was right: Twitter is a dumpster fire that throws a hissy fit at the drop of a hat, in way very few other sites do; it's almost like they do it purposefully out of spite.
It's a fucking glorified PLAIN text messenger. There is no excuse for the level of complexity and unreliability of the code they throw at browsers, from such a large organization with such supposed high-level engineering talent.
Twitter works fine with JS/XHR enabled in uMatrix for first party and *.twimg.com. It’d be better if they didn’t bring twimg.com into it, better still if they supported operation without client-side scripting, but I don’t observe the problems you’re reporting.
Imagine going into a board meeting with the senior leadership of any one of these companies and saying "we need a major initiative to make our site more accessible to 20 people who block all Javascript and probably have contempt for us." Then tell them how much it will cost.
This is exactly it - by limiting javascript you are choosing to be in a minority of users and then demanding to be catered to. You're just setting yourself up for disappointment.
These are businesses - they write code to generate profit. Their support of limited javascript has no impact on profit, so they won't do it.
Or, as developers, we could be good stewards of the web and say the feature isn't complete until it's tested to at least not completely break, and to give an error message, when a script or resource isn't loaded.
If you get to unilaterally make technical decisions in your company with complete disregard for return on investment, then be my guest - but I'd look for other jobs because companies that prioritise developers complaining over profit tend to go bankrupt.
I'm not saying make it all work without JS. Provide a noscript tag and actually handle your error states. These are basic best practices. If you get fired for not swallowing errors, I don't know what to tell you.
But that's not what the article is complaining about - they're complaining that most of the UI functionality doesn't work without JS. To make it do so would require a progressive enhancement approach, which is not going to sell well if the only benefit is an extra 0.1% audience reach.
Then perhaps we're not at odds, because full JS-disable support is often unreasonable. My gripe is with many of my own experiences using uMatrix with scripts disabled by default ('cause you never know what sort of modals and pop-ups and other junk random pages send you) and not getting even the slightest feedback on what went wrong. I'm aware that this is a minority setup, but throw a user a bone. I've been banned from signups because I failed a third-party CAPTCHA I never saw. I enable JS manually on almost every site once I get an error and it's vaguely trustworthy.
I'm not familiar with Windows but I'm going to assume the OS will at least give you an error. A lot of websites are just blank and/or swallowing errors without giving feedback.
Now that it's 2021, most of the concessions the author makes are possible with CSS. You can change the styling of an element using hyperlinks with the `:target` pseudo-class:
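A minimal sketch of the idea (ids and markup are just illustrative): the link sets the URL fragment, and `:target` matches the element whose id equals that fragment:

    <style>
      /* Hidden until its id matches the URL fragment */
      #menu { display: none; }
      #menu:target { display: block; }
    </style>

    <a href="#menu">Open menu</a>
    <nav id="menu">
      <a href="/home">Home</a>
      <a href="#">Close</a>
    </nav>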
This works for dropdowns, tooltips, modals, even navigation if you're navigating to known places (or if you use JS to pre-insert the destination into the DOM). And of course, you can animate the changes with pure CSS.
:target has been a possibility for many, many years, but is a poor choice for this sort of thing because it’s rather fragile (other things touch the fragment too) and messes with history.
The better hack that has been long available is invisible checkboxes, :checked and a sibling selector. That’s almost certainly what “you could fake it with CSS too” was describing back then.
But the proper solution now would be <details>. As an example of this, many dropdowns on GitHub work without JavaScript, by using <details>.
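A rough sketch of that pattern (the markup is illustrative, not GitHub's actual code):

    <details>
      <summary>Options</summary>
      <ul>
        <li><a href="/edit">Edit</a></li>
        <li><a href="/delete">Delete</a></li>
      </ul>
    </details>

The browser handles opening, closing, and keyboard toggling with zero scripting.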
I would note that with all of these alternatives, you tend to want a little JavaScript to enhance the functionality, e.g. manage ARIA states and focus.
These sorts of arrangements typically don't hold up when it comes to finer points of UX, like "clicking outside closes it" or "closes after x seconds", and so on. What if you wanted a keystroke shortcut to open the thing, or a keystroke shortcut to close it? The list goes on and on and on.
Product managers and designers don't care about reducing JavaScript, so that really has no weight against the things I've mentioned plus more. They want it to work the way they want it to work.
No one asks if Excel or Hearthstone or Git or other desktop applications have progressive enhancement, it's unfortunate but these days no one should be asking it about web pages either.
The web is an application platform; it stopped being about documents years ago. Whatever you may feel about XForms, its spec was published nearly 20 years ago, and even now HTML forms still cannot do something as basic as a PUT request without JS or workarounds.
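For what it's worth, the usual workarounds look something like this (the endpoint and field names are made up for illustration): either tunnel the real verb through a hidden field that the backend rewrites (the `_method` convention used by e.g. Rails and Laravel), or intercept the submit and send a real PUT with fetch:

    <form id="edit" action="/articles/42" method="post">
      <!-- Hypothetical method-override workaround: the server maps _method to PUT -->
      <input type="hidden" name="_method" value="PUT">
      <input name="title" value="Hello">
      <button>Save</button>
    </form>

    <script>
      // Or do it client-side: fetch can send the verb HTML forms can't.
      document.getElementById('edit').addEventListener('submit', (e) => {
        e.preventDefault();
        fetch('/articles/42', { method: 'PUT', body: new FormData(e.target) });
      });
    </script>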
It's clear that browser vendors expect you to use JS for basic functionality, and for document distribution you'll be using PDF anyway, so why try to pigeonhole the web into something it is not? What's to be gained by pretending otherwise?
> for document distribution you'll be using PDF anyway
This is true and it makes me really sad. HTML in particular, and the web broadly, is really good at document presentation and distribution. I see PDFs every single workday that should have been a tiny, readable, mobile-friendly-by-default HTML file.
Desktop GUIs were making great inroads until the push to put everything on the web arrived. HTML/CSS/Javascript was never meant to build these types of apps. That's why it is all popsicle sticks and glue. Javascript was almost toast until Google brought it back from the dead.
> No one asks if Excel or Hearthstone or Git or other desktop applications have progressive enhancement
I do. I stopped "upgrading" most applications years ago because they only became worse with each version. Excel probably peaked in 2003, arguably 2010. Anything beyond that is frustrating ribbon-UI garbage IMO.
Can you imagine how great it would be if nearly every graphical native app on your computer had a pure-text representation of its state that you could query/extend/manipulate? Or if you launched an app and it was missing a dependency and it launched anyway and just disabled a small amount of functionality related to that dependency? Some applications already kind of do this with their dependencies (Firefox will launch even if you don't have a GPU or video codecs installed), but it would be great if they did it even more.
The divide between applications and documents has always been a lie. Most (not all, but most) native apps are interactive documents, they are documents that change when you do something to them, and that are just styled to be displayed in 2D space instead of as pure-text trees. There are very few native app interfaces on my computer that couldn't render out to HTML or something similar. And it would be amazing if native apps were rendered out as pure-text XML/tree-based documents, that would be so useful. It would be even better if there were system-wide conventions for things like links/buttons inside of that tree that could be specified declaratively and that were user-inspectable. That would be a big improvement to accessibility and extensibility for application UIs. It would be like having CLIs for everything, but even better.
Applications are documents, and documents are often interactive. There is no actual distinction, it's just a gradient of how complicated the document is and how often it updates.
This is my pet-cause and at some point I'm going to finally snap, sit down, and write an extensive manifesto about it. Basically every single native app on my computer should work headlessly, should implement progressive enhancement, and should split out declarative representations of state that can be styled and intercepted by other programs.
Native app developers have gotten hung up on the bad parts of the web and the bad parts of HTML/CSS/Javascript (admittedly there are a lot), and it's caused them to completely miss the revolutionary idea that Unix already proposed all those years ago -- that most apps should output human-manipulable text formats that users can inspect, restyle, and pipe into other programs. HTML is the next evolution of that idea, and native developers are so hung-up on the details of the web that they've missed the broader picture and are stuck in the past in how they think about UX presentation. Even worse, they've started to push those outdated ideas onto the web; instead of taking a step back and trying to think about how to make more modular and more inspectable interfaces, instead of thinking about what a better implementation of HTML for native apps could look like, instead they argue that the web itself should do away with these concepts and just render out blobs of pixels compiled in WASM threads.
>here is a screenshot of a tweet, with all of the parts that do not work without JavaScript highlighted in red
And this is when I realized this article was written a long time ago. Now if you attempt to read a text post on Twitter you get nothing at all unless you execute JavaScript first. The entire screenshot would be red in 2021.
This blog loads 2 MB of scripts, and if you disable JS you can't leave a comment or even read any comments. Personally, I don't have a problem with this, but I'd have expected a blog hosting an anti-JS essay to be less dependent on JS. The author notes in the noscript block that using Disqus is part of hosting a static blog, but that definition of "static" is an implementation detail, i.e. a convenience for the developer; from a user's perspective the result is identical to rolling your own JS comment system. The site doesn't work if you disable JS, which calls into question the value of building a "static" website, especially in the context of an anti-JS essay. The author could have instead supported non-JS comments, but when the rubber hits the road they appeal to the same developer convenience and UX benefits that typical JS developers espouse.
I think you misunderstood the entire point of the author, especially since you have called it an 'anti-js essay'.
From the article:
> Accept that sometimes, or for some people, your JavaScript will not work. Put some thought into what that means. Err on the side of basing your work on existing HTML mechanisms whenever you can
As you have observed, disabling JavaScript does not make the site stop working entirely for no reason. It degrades meaningfully, and the author put thought into what that means
Moreover, enabling JS does not break browser functionality you're used to.
> As you have observed, disabling JavaScript does not make the site stop working entirely for no reason.
Such is the case for most JS sites, the site typically doesn't break "entirely", but I consider being unable to post or even read comments to be a major functional breakdown. Comments are non-interactive text, there's no justifiable reason why I should need JS to read them.
> I consider being unable to post or even read comments to be a major functional breakdown.
For a blog? I disagree, because the point of a blog is to read the author’s comments, not those of third parties.
> Comments are non-interactive text, there's no justifiable reason why I should need JS to read them.
I kinda agree. The issue is that it is very rare to find a static site generator which can interface with a dynamic comment storage. To be honest, I don’t know that one exists, although of course it is completely possible. I can understand as a pragmatic matter why someone might have a static blog, want to add comments and not be willing to run his own dynamic comment system, esp. if that would entail writing said system.
All that being written, it would be awesome to have a static site generator which were called by a hook in a dynamic comment server. I actually would like to write one someday!
> For a blog? I disagree, because the point of a blog is to read the author’s comments, not those of third parties.
The author has posted several comments in the comment section, so even by your definition the blog fails to meet functional standards. However, IMO, the distinction between the author's comments and user comments is arbitrary, both are text meant to be read by readers, and those readers without JS are missing a ton of discussion that's happening in the comments, including discussion with the author. There's actually more to read in the comment section than in the blog post itself.
> All that being written, it would be awesome to have a static site generator which were called by a hook in a dynamic comment server. I actually would like to write one someday!
I have actually seen a system like this implemented at a company I used to work for. A posted comment would be stored in a SQLite database before a request to rebuild the comment section was fired (as a partial, to avoid re-rendering the whole page/site), subsequently dumping the new build into the CDN (with some debounce logic to throttle high comment traffic). The rebuild was typically sub-second and performed very well.
Minor improvement: a rebuild queue. Workers scaled to platform capability (or just one). When a worker fires, scan the queue for other requests for the same content and use the freshest version / mark the future requests completed when the refresh finishes.
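Something like this, as a very rough sketch of the coalescing idea (names and timings made up, not the actual system described above):

    // Hypothetical coalescing rebuild queue: many comments on the same page
    // collapse into a single pending rebuild that uses the freshest data.
    const queued = new Set();   // page ids with a rebuild already pending

    function requestRebuild(pageId) {
      if (queued.has(pageId)) return;   // the pending rebuild will pick up the new comment anyway
      queued.add(pageId);
      setTimeout(async () => {          // small delay so bursts collapse into one rebuild
        queued.delete(pageId);
        await rebuildCommentPartial(pageId);
      }, 500);
    }

    async function rebuildCommentPartial(pageId) {
      // placeholder: read the latest comments from the store, render the
      // HTML partial, and push the result to the CDN
      console.log('rebuilding comments for page', pageId);
    }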
It does seem like it'd be neat of these embedded comment services to have a server side rendered page that just shows all the comments for a specific post, which could be linked to when JS is disabled. Yes, the link sends you off-site, but you'd still be able to read and probably even participate without JS. I'm sure there's at least one service that can do this.
This seems to be exactly how Gatsby et al work, by pulling in "dynamic" content over an API at build time. I've never seen this being used to re-render a site based on third-party changes like comments, but it's popular to pull data from a first-party system like a CMS containing e.g. the blog posts.
Though I wonder if the prevalence of programmable edge networks will leapfrog that before it becomes really mainstream, and allow site owners to add a small bit of edge dynamism to pull dynamic content in without client-side JS.
I think that it's a bit of a cop-out to call your site static in instances like this. You've outsourced computation to a 3rd party as well as the user's browser, as opposed to doing it on your own server. It would be dynamic if you cobbled a comment system together with some CGI scripts and an SQLite database and some server-side includes, and it's still dynamic if you abstract that into a runtime dependency on the client, because the content of the page is going to be different whenever you refresh and new comments are made.
The difference between the server-side option and the outsourced one is that the outsourced one is probably also going to soak up and resell a lot of data that you probably wouldn't have captured yourself.
A "static site" typically means to have the site being generated through a build step such that you can serve it straight from a CDN or simple file server. It doesn't say anything about the content included and it's "dynamism".
Indeed. I get a few emails a month from my site and I'm happy to read every one. Whereas I'm sure if I had an open comment section eventually adbots would discover it.
The comments on the blog are managed by Disqus. If he wanted to do them without JS, he'd need to build a commenting system all by himself - and that's almost impossible nowadays because of the spam issues.
Everything else on the site seems to uphold his principles.
Impossible? Hardly. Comment screening, rate limits, captchas, and email verification are all great spam mitigation options. Besides, you can still get spammed using something like disqus, I see it all the time on high traffic news and entertainment sites that use disqus.
I feel like I sympathize with a lot of the various JavaScript rants in a way … but I’m not convinced that many of the blogs about it (that now feel like years old spam) are actually practicing what they preach or have ever waved the magic wand they want to exist.
I’ve yet to see a real guide from someone building an even moderately complex site and moving away from these terrible frameworks and “unnecessary JS”.
In the end most of these boil down to “I wish other people would build their sites the way I want them to” without much consideration of how / why a site is the way it is in the first place.
Yeah I'd like to see that too. One site I think that does a fantastic job is sourcehut. It actually has a little bit of JS, e.g. for the builds page to stream results, but it's fast and light, and measured to be fast.
For people who don't remember, the result is very similar to how Google looked ~15 years ago -- Google News, Froogle, search results, etc. The underlying tech was different, but the result is the same. Google just used C++ and the "google-ctemplate" language.
----
I wish that every food ordering app was written like this.
I mean all they are doing is displaying a list of pictures and then providing a checkout experience -- it's literally eBay from 1998, but 1000x slower.
It would also be like 10x faster than a native app on phone if it were written like that.
In fact Google has a lightning food ordering app right on the search results page that proves the point! However I tend not to use it because I don't think it's good to let the search provider "hijack" the traffic intended for restaurants. i.e. presumably the restaurant will put their preferred vendors on their sites, which is almost never Google, and is instead some really slow and janky app :-(
+1 for sourcehut. The people there have been making great, lightweight software for a lot of stuff. Despite looking simple they are often really powerful.
Way back, Conventional Wisdom was "build the site so that people with JS turned off could still use it." This was great advice in 2001, but it's pretty hard to do now. You have to choose what is, and what is not, convenient and/or appropriate for the end user. And that's pretty hard to determine, as there are more varieties of "end users" than atoms in the universe.
Plus, even if you're really careful and make something that is judicious with JS, phone users come in and blow everything up. For example, leaving a review could be done JS-free by opening a dedicated review view, and if the end user needs to refer back to the product, they can open a new tab. It's not so easy with a phone, unless they have really strong phone browser kung fu.
I would have thought that by now we would have settled on broad conventions for most things, so nobody would be inventing all new ways of doing the basics. Instead it seems like things are proliferating. If there are any standards, they're top-down things from Big Corporations like Twitter or Google, because they have the muscle to force everybody to use their conventions and like it.
Both websites could easily be done without JavaScript. I have built a blog and a site similar (without voting) to this using PHP/HTML/CSS when I was young.
HN works nicely without JS. With JS enabled it doesn't need to reload the page when you vote, so it's not necessary but still useful.
The problem is most other sites, which often don't even load or show any content without JS support, or break in many creative ways. Although for weirdness you only need to disable custom fonts in the browser, and suddenly Reddit's notification button shows the PayPal logo and many websites have random Chinese symbols scattered around the UI. And Google Translate looks like you need a translator to use it.
Product Designer: "No system controls; we want our users to have the Full Brand Experience, therefore, custom controls."
Also Product Designer: "Why don't our custom controls work as well as system controls?"
Wow, that's a sobering read, thanks! It's funny to see how some of the common gripes with modern "flat" design were so obvious even before it became mainstream. Even though it (in my opinion) looks better today, the skeuomorphic, brutal, beveled Windows 95 UI was probably the most obvious and clear interface I've ever used.
Except for when those custom controls fail to implement all the features of the built-in ones, as the article details from various popular sites, at least back in 2016. But maybe at the end of 2021, all custom stuff is fully accessible and reimplements everything.
As a product designer, I am and have long been looking for a company that will enable—hell, allow—me to design software that employs system standards. I can’t imagine I’m the only one.
Problem is, most leadership doesn’t care about usability unless it affects marketability, which is less and less likely as people (users) become more comfortable with and accustomed to hopping between needlessly proprietary workflows and abstraction of crucial processes.* Often (usually?) designers also don’t, to be sure, but those like myself who do care are, like devs, up against unreasonable speed demands and interruptions that make it difficult to do our best, most thoughtful work. And though it may seem like design is higher up the totem pole, perhaps because it’s usually upriver from dev, in my experience, most companies see designers as whiny graphics monkeys, just as they may see devs as whiny code monkeys.+
It’s significantly more difficult and time-consuming to design novel functions and flows that conform to or complement existing standards than it is to draft a new system that need only make internal sense and may wantonly disregard the environment around it. This is why people like Electron, right? This is why people liked the internal combustion engine.
* I’d like to see a study on how this kind of acontextualization of work tasks might exacerbate burnout.
+ How many companies are actually defined by great design these days (versus, say, great functionality)? Not even Apple seems to care about that anymore, as they seem eager to outpace their software with their hardware, rather than creating any iconic or lasting designs. As "product design" has evolved to include software products, the discipline and its output have become just as mutable, and therefore disposable, and therefore 'not worth' the effort necessary to make good things, let alone great things.
I mean the product manager isn't wrong; there is no good reason we shouldn't be able to make custom components with the behavior of "native" widgets. The problem is that the dev story is "build a widget from scratch out of divs" and not "extend the fully functional component with new styling and hooks and slight behavior modifications."
We created this problem ourselves by not having the tools to meet designer needs while doing it right. Pontificating about how they should just change their notion of right misses it completely.
Lack of technical solutions isn't the only issue. System controls (well most of them - recent Windows and macOS seem to be regressing) have decades of refinement and battle-testing behind them and users are familiar with them. You're not replicating all of this alone regardless of how much money or years of experience you have.
Consistency is overrated. Context is what matters. And context often requires customized controls.
We will never live in a world where an abstract concept like a checkbox can have an 'assigned' specific visual affordance that doesn't allow for adjustments. Such thinking is stuck in the past and won't allow for new/better UI paradigms.
This purist talk about how everyone should stick to some system standard or whatever really needs to die. It's not working. Nobody is doing it.
Everyone, literally everyone is running their own thing. Name me a big company that doesn't roll their own design system.
Preaching purism really doesn't help anyone, let's just accept reality as it is, and stop chasing some utopian dreams that just won't ever materialize (and if only because it would make a lot of jobs and professions useless, overnight, and there's too much inertia for that to happen).
I’ll keep complaining about it as my parents age and struggle to use their tv apps to watch shows because every product manager needs to prove that they can reinvent the search interface.
Let's take windows. The scroll bar on the start-up screens looks different to the one in explorer, looks different to the one in Chrome, which looks different to the one in IE11, which looks different to the one in IE Edge, which looks different to the one in Firefox, which looks different to the one in excel, which looks different to the one in the control panel, which looks different to the one in visual studio, which looks different to the one in Visual Studio Code.
So if they're _exactly_ getting the point, they're pretty unobservant.
We're saying the same thing. The web's current tools require devs to reimplement all that functionality from scratch, which isn't gonna happen, when the ideal solution would be to start with the battle-tested native control and then tweak from there to fit the design.
But, once you tweak it, it is no longer the same "battle tested" thing. Even little tweaks. App developers should do themselves a favor and just stick to the standard controls that have decades, maybe centuries of tester-time working out the kinks and edge cases. A medium-size company's UX expert's "restyling tweaks" are unlikely to make the control better.
This position doesn't make sense to me. Certain customizations to the native control (which is the ask here) are already available, such as adding padding to input controls, increasing the font size, changing the width. You can even change the border color and the border radius and the background color for some controls! Some of these turn out to be necessary functional changes, and some of them -- imma say it -- are obvious and important adjustments so that your app looks consistent and everything lines up properly.
To think that there's no room for variation in the design of form controls, given the broader context of a design system for an app, is just a lack of imagination. I guarantee you that the people who design the operating system form controls (which change every couple of years) have all sorts of ideas, and they end up putting out one possible, relatively minimal option from the many good designs they considered.
Please have a look at any OOP desktop framework from, like, the past 3 decades. It is not a difficult concept: one is free to overload the render function, while the functionality remains the very same, like getting focus with tab, activating on space, whatever.
My point is that by tweaking the looks of the existing thing you are breaking standards users are used to or might be causing issues you're not even aware of (bad color choices for colorblind people for example) even if the behavior/functionality of the control is unchanged.
I think this is an important concern, but not a disqualifying one. There is already a lot of variety in the space of controls, for example compare browser controls to the Office Suite to native OS, it's a big spectrum. When controls are customized correctly they become more usable within the context of the app because they are sized consistently, line up with the content grid, follow visual cues from the rest of the app, etc.
I think following your argument out to its logical end would have everyone building apps that look like native OS interfaces, which are intentionally minimal and (more importantly) just a tiny speck on the enormous space of beautiful design possibilities.
Maybe you can introduce smaller changes over a longer period of time? I dunno. Would not Windows users be stuck with that very old look if there were no changes? I mean, is this actually what you are favoring? That said, I cannot stand the trend of UI/UX on desktop being so touch-friendly, along with more padding and/or increased font sizes and less content, too.
It's not just appearance but also behavior. There are differences across browsers, OSes, and devices in how UI widgets work and inevitably the "custom" approach is a compromise in favor of the PM's personal workstation.
And lest it be overlooked, the appearance of those widgets themselves, is a feature. They are easily identifiable by users of that platform as being something they can interact with and expect a predictable set of outcomes.
The problem, as always when talking about cross-platform development, is that you can have consistent widgets OR native widgets, because the native platforms have different look, feel, and interaction standards.
Would it be great if everyone on the frontend could get together and agree what the native-to-browser widgets should look like and how they might be customized? Yes. Would the three warring browser manufacturers change their platforms to make web development easier and lighter for everyone, at the cost of platform uniqueness? No.
> Would it be great if everyone on the frontend could get together and agree what the native-to-browser widgets should look like and how they might be customized? Yes.
This would not be great at all. I want widgets on web pages to look like macOS system widgets on macOS, and to look like Windows system widgets on Windows. I want designers to lose the notion that it is their system style, and respect the fact that the browser is _my_ user agent, and therefore must respect _my_ preferences.
No, it’s because it’s stupid difficult to replicate the native functionality exactly, retain accessibility traits, and if you get it perfect, then you have to deal with the fact that different browsers and devices interpret these controls totally differently.
We're saying the same thing. The web right now requires that you reimplement all that functionality from scratch, when it should be just inheriting all the behavior from the native control in your custom thing and tweaking the styles or adding some additional functionality.
> Pontificating about how they should just change their notion of right misses it completely.
The ask isn't to change their notion of right, but to understand that (yes, sadly) there is a tradeoff between "usable but ugly system components" and "pretty but janky" bespoke components, and they're making the wrong choice.
If what feels like the vast majority of them are making the same "wrong" choice, we should probably think about ways to make that choice more "right", because clearly, the current approach isn't working
This is my approach. To me the ask for custom components from designers is extremely reasonable. The fact that it’s so hard to get all the edge cases right on the web is a failing of the tooling not the ask.
At some point we have to climb out of the trenches and introspect why the most treaded path in web design is the one with the rickety bridge and spike pits when we the software engineers are the ones who lay the bricks.
It’s not a failing of the tooling. It’s a failing of designers and product to understand what a web page is and what an application is.
If you've only designed/"ideated" for iOS or Android, you have no business designing for web. Entirely different platform with different capabilities.
Embrace reality and build simple effective products. Literally no one gives a shit about the arrows on your carousel or the border radius animation on a button. I just want to scroll and click and open new tabs.
Well, it's historically been because 'extend the native widget' isn't possible with CSS. Web Components change that a little bit with well-scoped CSS, but that's still a very JavaScripty solution.
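For what it's worth, customized built-in elements get part of the way there; a rough sketch (the element name is made up, and note Safari has historically not supported the `is=""` form):

    // Extend the native <button> instead of rebuilding it from divs:
    // focus, keyboard activation, and form submission come for free.
    class FancyButton extends HTMLButtonElement {
      connectedCallback() {
        this.classList.add('fancy');   // styling hook only; behavior stays native
      }
    }
    customElements.define('fancy-button', FancyButton, { extends: 'button' });

    // Usage in markup: <button is="fancy-button">Save</button>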
Of course, I'm not at all advocating that the solution is to just do it, but the web really hasn't kept up with the needs of the kinds of apps people want to build with it. It's easy to build something that works but annoying-to-impossible to build something robust, so we're left with every single website being functional enough to work but broken.
I don’t think telling developers “just do it right” is a scalable solution, the platform has to make the least effort path the one that works best.
I recently came across a web site that allows one to zoom into the content (it's a mapping site). Only the designers just assumed that everyone would have either a mouse with a scroll wheel, or a touch pad. Well, I have neither. And the developers of the site suggested "oh, just install this piece of software that allows you to bind some keyboard shortcuts to simulate a scroll wheel" instead of, you know, providing two damn buttons to zoom in and out! ("But ... but ... but ... adding those two buttons would destroy the aesthetic of the site!")
To be fair to "the system", it was developed over decades, much of it before design was even a consideration, and almost all of it from a time when modern designs would have been a pipe dream on given device resources.
I know this is likely to be controversial, but JS improves the user experience a lot, both in terms of interaction and speed, as well as making development more manageable (if used correctly), alas at the expense of annoying purists who would prefer to engage in all sorts of CSS/HTML gymnastics just to avoid using JS (and other kinds who would prefer vanilla JS to frameworks). Do that in a large project and you'll quickly realise how unmanageable it is, let alone, as the author says, "iffy".
> I think the Web is great, I think interactive dynamic stuff is great, and I think the progress we’ve made in the last decade is great. I also think it’s great that the Web is and always has been inherently customizable by users, and that I can use an extension that lets me decide ahead of time what an arbitrary site can run on my computer.
> What’s less great is a team of highly-paid and highly-skilled people all using Chrome on a recent Mac Pro, developing in an office half a mile from almost every server they hit, then turning around and scoffing at people who don’t have exactly the same setup.
And later on:
> I’m not saying that genuine web apps like Google Maps shouldn’t exist — although even Google Maps had a script-free fallback for many years, until the current WebGL version! I’m saying that something has gone very wrong when basic features that already work in plain HTML suddenly no longer work without JavaScript. 40MB of JavaScript, in fact, according to about:memory
Doesn't seem to be a JS specific criticism, testing application performance on typical user hardware is a key responsibility of QA regardless of the language or platform.
This is a really common problem in tech. Smaller companies adopt large company strategies that require more resources than the small company has to do correctly.
Sure it can, but is that the way these companies are using it? Are they using Javascript to improve experience and development speed? If a Twitter textbox is lagging while someone types, then that's a degradation in user experience, and a pretty fundamental one. I like autocomplete too, but I also like my text box not to lag, and maybe there's a middle ground?
I wouldn't say I'm a "purist" about this stuff, but I do feel like at least scripting should make the site feel better, not worse. A lot of these sites are really unpleasant to use, Youtube swapped over to this single-page model that I'm sure was very difficult to build, but sometimes it just breaks navigation for me. What fixes it is I reload the page. So it feels like, to me, even as someone who has Javascript enabled on Youtube, we have gone from an experience where navigation never broke for me on Youtube, to a situation where sometimes Youtube loses track of the fact I'm online and I need to refresh the page, or I hit the back button and it reloads the same video. That's a lot of extra Javascript for an experience that feels like a step backwards. The old Youtube designs all used Javascript, but they also had working back buttons and felt just generally snappier.
I went on Google maps recently and tried to quickly copy a number of website links for businesses into a text file: right-click copy link doesn't work. I can't copy a link for an external website, Google Maps is no longer using hrefs for literally the primary thing they were designed for.
And for what? It's not faster for me to open the external websites, they don't load quicker. I don't get URL previews or some crud. It's a link that doesn't work as well, all that's happened is there's a click handler that has fewer features than the link used to have, and maybe it's easier for Google to get a pingback? This isn't an improvement.
Since when do large companies do anything that isn't user-hostile and abusive? Of course FAANGs are trying to keep their walled garden more walled by reducing outgoing links, and Twitter's incompetence is legendary
The real question should be if this type of thing is improving the web at large or the capabilities we can develop "efficiently" on it, and while I would be willing to hear arguments either way, I can tell you I use JS/HTML/CSS in combination to introduce capabilities that would not be palatable without the JS component of that, putting aside whether I would be able to develop them as a bunch of standalone capabilities. Model editors, graph layouts, plugin architectures; we can leverage client machines to do more and more, and in a business setting delivering internal tools this is a great method of reducing costs across the stack - the laptops were already going to be purchased.
YouTube is super slow for me because of all the JavaScript. Plus I have to refresh because whenever I click on a link, it just takes too long to load, so I click refresh, at which point YouTube says I lost my Internet connection, which I did not. Sometimes I end up with some video showing comments from the previous one. That, and I hate the "more space, less content" trend.
> Are they using Javascript to improve experience and development speed?
Some, yes. This was in the spec sheet for a large ecommerce website I helped rebuild with modern(er) technologies.
> If a Twitter textbox is lagging while someone types
That's an implementation issue.
Re YouTube, I haven't had the experience you describe, but having had to develop the framework side of SPA navigation into something robust and usable (keeping the current page on screen after a new URL was history.push'ed, while preloading the data necessary for the next render and managing the scroll position), I can confirm it is a pain. I expect YouTube has the right kind of resources for the task, but it happens, probably at a higher rate of bug incidence than regular frontend button-broke-the-site issues.
Implementation issues are going to happen whether you use JS or not, but I find building with JS and with modern frameworks a joy (although a lot of things can be improved) compared to server side alternatives. As with everything though, this is a matter of preference.
But what's the whole point of rewriting the URL and re-rendering the whole page in JS? I'm fairly sure that browsers are really great at caching, so no additional HTML or JS should need to be downloaded, and the change might very well be faster, with better history and UX, if we drop SPAs.
The overall experience feels (and is) faster as you only need to update the DOM, not recreate it from scratch, so you're not rerendering the whole page. There's also less processing required on the server as you're only requesting the piece of data you know will change and the response comes back faster. The history API makes going back and forth seamless, as if you were browsing the "classical" way. But this comes with additional complexity which is sometimes not dealt with correctly.
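Roughly the pattern, as a sketch (the `.json` convention and the `#content` selector are made up for illustration):

    // Fetch just the data that changes and patch the DOM in place.
    async function render(url) {
      const data = await (await fetch(url + '.json')).json();
      document.querySelector('#content').innerHTML = data.html;
    }

    // On navigation: update the content, then record the new URL.
    async function navigate(url) {
      await render(url);
      history.pushState({ url }, '', url);   // keeps back/forward and the address bar working
    }

    // Back/forward re-render from state instead of reloading the whole page.
    window.addEventListener('popstate', (e) => {
      if (e.state) render(e.state.url);
    });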
Perhaps it is just poorly written, or maybe it is the animations that are making it look like it is slower than it should be. I created a simple website where I update parts of the DOM, and it is super fast. I have the opposite experience on known websites.
Totally agree. I implemented a page with tons of posts on it that hides the posts before the cursor provided in the URL, and I simply show/hide them with display: none. Even if it's 100 posts hidden, the images are lazy and clicking "show" unhides all of them immediately. If new posts come in we simply append to the top of the parent container with basic DOM APIs.
The app we're replacing with this page is heavy React, and there's a long visual stutter during the show/hide because it has to render them all, not simply show/hide them. They actually load more bits to lazy-load the posts in JS than it would take to simply send the HTML and hide/show it.
Right, but like I said, I'm not a purist about this, I just think that maybe SPAs should only be rolled out if the implementation actually works.
To circle back around to Google Maps, this is an app that had working links, and the introduction of more Javascript made them worse. So I'm not saying get rid of all Javascript, I'm saying that if you swap out `<a>` tags and it results in broken browser functionality, that's a scenario where just using native tags would result in a better experience.
Twitter/Youtube/Maps are choosing to go down this route even though the implementation is worse. If they didn't have implementation bugs and navigation didn't break on sites like Youtube, and I could copy and paste links from Google Maps, and I didn't have laggy text input on Twitter -- well, then I wouldn't be complaining about any of this.
It's not really about Javascript being bad, but we can both agree that if a company tries to rewrite a website as an SPA and it results in a more buggy experience, then they should hold off on that and they shouldn't just plow forward with the implementation anyway. That has nothing to do with SPAs in general being evil, but even the act of saying "a Javascript-heavy SPA solution should only be pushed if the experience is better than the normal one" feels like I'm saying something revolutionary, and I don't really get what would be controversial about this opinion. Only push a Javascript heavy solution if your solution works, if it results in more breakage than the site had before that solution, then the engineers are moving in the wrong direction.
I don't know if this is a controversial thing to say, but I would say, "only push an SPA if you don't have a bunch of implementation issues". And I think that's where sites like Youtube and Twitter are falling over, in those cases their implementations are either broken because of tracking or just complexity, or something -- but it would have been better for them specifically to have not turned their sites into SPAs, because they can't seem to get rid of the implementation issues and their older sites didn't have as many showstopping bugs or performance issues.
I agree with you that sometimes this kind of partial updating can result in great, quick apps. But in practice, Nitter is faster than Twitter for reading tweets. So if I go to Twitter and I'm not signed in, and I'm just reading tweets, that's an experience that Nitter is just flat-out better at providing in a more performant way. So it's not that all SPAs are evil or that they should always be avoided, but clearly in Twitter's case the introduction of an SPA didn't make the site faster or better; the low-JS solution ends up outperforming it on multiple key metrics.
On this site, for example, if I upvote your comment... the status will just be updated. Without JS, it causes the page to reload, and I will briefly lose my place.
Since you mention autocomplete, dealing with forms is a lot better from both a development perspective and UX, i.e. complex forms being validated client-side (as well as server-side, of course) and getting instant feedback. Modals. Having slower things loaded after the initial page load, resulting in the page loading faster (more so with session info being loaded less often in SPAs). Infinite scroll for search pages (not the Instagram variety). Basically all "modern" web UI.
Maybe it's a niche perspective, but as a blind person the obsession with JS forms is what makes the web borderline unusable for us. While it's not impossible to make an accessible experience with JS, people mess it up so often that it may as well be.
I honestly think the problem with JS reimplementations is that developers assume that they are their audience, or that people like them are the only people that matter.
While I understand the concern, I'm not really sure I buy the argument "JS makes accessibility bad".
It is for sure easier to do things wrong, but if you look at most of the major libraries for front-end (drag-and-drop, routing, dropdowns...), accessibility is built in, and a critical selling point (e.g. react-router, downshift...).
I think the proportion of front-end developers knowing about accessibility is just low, and the result is more visible for JS-heavy websites/webapps, but this is imho an education problem, not an ecosystem issue.
Having worked in agencies, accessibility was always treated as a second-class citizen (by clients or managers, not by developers, trying to push for it), and clients would often say "let's go live without it", then would come back to us asking to finish the job once they saw their competitors got sued for having an inaccessible website.
So JS may be a catalyst, but not the root of the problem. It's our job to push for the importance of it, as we pushed for responsive websites a while ago.
The reality is that the education you're talking about is never going to happen. By the time you had 80% of devs knowing how to do a11y on JS framework #271, a whole new paradigm would have come in. It's because accessibility is not a priority that accessible defaults, which almost definitionally need to be system- or browser-based, are so important.
If you make a form with HTML and style it with CSS, then you're 85% of the way there with accessibility, and chances are it will be usable if you screw the rest up. With JS, even if you're working from a checklist, you're much more likely to get something wrong. And then there are regressions. I kind of believe that you know what you're doing, because the kinds of people that hang out on HN often do. But will your second-generation successor, four years from now, know how to update your work without breaking accessibility? Empirically, based on the low level of accessibility on the web (improving, but still pretty tough going), I'd say "no."
I think sites that fail the accessibility test should be shamed into compliance. Possibly like how they handle sites that aren't https. I can just imagine how frustrating it must be to have an impairment that hinders usage of a website. Especially if it might be something essential.
It's mostly fine for me, because I'm technical and can do some weird thing or use OCR tools I made myself or whatever. It's incredibly difficult for the average blind person, who is statistically likely also to be older as well.
Agree on all counts except instant loading, you can still do server-side rendering (from the same codebase as the client) and have the client library bootstrap/hydrate its state from it.
I mostly dislike infinite scroll and am irritated when I see visual things load after the page loads. I also don’t see the point in SPAs for regular websites. The modern web looks better, but sometimes the experience is worse, at least for those of us who were around before. Of course by this I don’t mean actual applications or games on the web. Just standard websites.
Modals can be done with CSS pretty easily (with either :checked and a label to activate the modal or :target and a link to activate it or with details/summary etc.), so can form validation (the required and pattern attributes), and lazy loading (add the loading="lazy" attribute for images and iframes). Infinite scroll can't really be implemented easily but in theory it's possible with server support (I'll make a demo and get back to you).
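A quick sketch of the built-in bits (attribute values are just examples):

    <!-- Client-side validation with no JS: the browser blocks submission
         and shows its own error UI for required/pattern violations. -->
    <form action="/signup" method="post">
      <input type="email" name="email" required>
      <input type="text" name="zip" pattern="[0-9]{5}" title="5-digit ZIP code">
      <button>Sign up</button>
    </form>

    <!-- Native lazy loading for images and iframes. -->
    <img src="photo.jpg" loading="lazy" alt="A photo" width="640" height="480">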
HTML mail is a mess that's impossible to get right; better to avoid it. Misleading users with a rich-text editor that looks like a word processor is bad because it'll pretty much never look as intended in anything but the same HTML editor that created it originally.
Annoying when pasting formatted text. Annoying when you try to delete a newline but it deletes an inline image. Sometimes tricky to add unformatted text after a formatted section (the editor assumes I want to continue the formatted section).
You could have made the same argument about Flash. It looked and ran the same everywhere, and offered features totally unavailable with HTML/CSS and JS.
The point is that it might work fine on your machine, but it's not fine for the rest of us. I am lucky to have a 2013 Retina with a GPU, but even so, there are more and more websites that are slow as shit, for no real reason.
When I'm out and about, small websites mean faster loading. This means that more people can use them without tearing their hair out.
I've seen server-side rendered websites that took several seconds to return a page, while not being under any particular load. How is that different?
Of course a static html file will load instantly and likely not cause any issues, but apples to apples would mean comparing dynamic server side rendered websites to dynamic client (or hybrid) websites. In both cases, inexperienced developers can make a mess of it, including by picking the wrong kind of tools to do the job.
I think JS gets part of its bad reputation because of the ads (and ad networks) it empowered and the bad practices they turned into status quo (reflows, cpu usage, to say nothing of in-your-face overlays and that kind of stuff).
A slow-loading server-rendered site would also mean a slow-loading API. At least with a slow server-rendered app, I as a user (not a developer) know something is happening. A fully formed client with a slow backend can give the wrong impression that the app is working, unless the developer reimplements the "loading" feedback that the browser provides natively.
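For illustration, that reimplementation ends up looking something like this (the /api/report endpoint is made up):

    <button id="load">Load report</button>
    <div id="status" role="status"></div>
    <script>
      // a client-rendered app has to recreate the "something is happening" signal
      // that a normal page navigation gives you for free
      document.getElementById('load').addEventListener('click', async () => {
        const status = document.getElementById('status');
        status.textContent = 'Loading…';
        try {
          const res = await fetch('/api/report');
          status.textContent = res.ok ? 'Done' : 'Failed';
        } catch {
          status.textContent = 'Failed';
        }
      });
    </script>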
> JS improves the user experience a lot, both in terms of interaction and speed, as well as making development more manageable
Almost. Most JS (caveat Google Docs etc.) creates the kind of experience users have come to expect. That's orthogonal to "improving" the user experience. Because of the keeping-up-with-the-joneses effects of frontend engineering and design patterns as spearheaded by giant, framework-authoring, standards-setting companies, there are strong incentives to converge on bad outcomes. For example, consider the common trend of replacing built-in controls with equivalents made out of "div soup" for the sake of providing consistent styling. This is done because it is thought that styling controls is a table-stakes necessity for interactive websites. Many studies indicate this, and many users request/demand it.
It's also misguided.
That's not to say that the studies--or the users--are wrong. But "this is how it is done and this is how we want it" is a norm, not some objectively superior optimum, and it's a norm that was created by many years of development. Some of that development was necessary, and advanced the state of the art. But much of it was fad/hype-driven, poorly thought-through, and slipshod. The end result is a bunch of tools/patterns/user expectations that bear little relation to what actually improves user experience.
In other words, JS often only improves things if you grant that the currently accepted definition of "improved" is actually an improvement over using other tools; often, it is not.
> at the expense of annoying purists
In other words, engineers. Engineers more interested in delivering quality than complying with norms or maximizing marginal click revenue by adding the 157th tracking library to a 50MB bundle.
Saying JS improves the experience in terms of speed is too general. Doing async requests can result in faster interactions than whole page reloads, but big JS blobs that offload work to the client can make initial page load take forever on small pipes and can make weak CPUs struggle under the weight. More CPU utilization also drains battery, which does harm UX. Ignoring low bandwidth and underpowered hardware is fine for some products, but a common mistake with many others. People with slow internet and cheap devices are people too, ya know?
What are some examples of JS improving the UX in terms of speed compared with pure HTML+CSS? Are there examples that are more than submitting & updating data without the wastefulness of a full-page refresh?
Heavy JS is a bit like junk food. It's fine to have a bit at the county fair and the fact that you can build a successful commercial business selling it isn't terribly surprising, but there are pretty obvious social reasons we shouldn't load it up in every meal consumers eat.
That's what articles like this are pointing out: this well-intentioned thing we're doing has negative consequences. Let's do it more moderately and deliberately to mitigate the side effects.
You see - applications. I am also developing heavy JavaScript applications where people manipulate or edit / collaborate on the content in many ways.
For me the problem is with pages/documents where I simply want to read stuff, maybe with some basic filtering options, and that doesn't require loads of JS.
Like webshops, where as a customer I probably don't need 80% of the JS they drop in there just for browsing products.
I understand they need a JS-heavy content editor on the side where people add/remove/modify products.
Is it you who built the Epic Games store homepage, which has appalling performance on mobile because it can't be bothered to display a list of games without heavy use of JavaScript? That's what "successful JavaScript-heavy applications" also leads to: developers never testing them on $50 Android handsets and not caring about performance in the slightest.
I care! As in the article - I really appreciate the benefits of JS when it's used in useful, tactful ways. But re-implementing a LINK, or a TEXTBOX, or a SCROLLBAR? That's super-duper irritating, and a horrible UX. I hate it. And I hate loading 1GB of JS just so websites can monitor what I'm doing and serve me targeted ads. It really sucks.
I think there are a fair number of JavaScript devs who really don't appreciate being told that they are on the same ethical plane as, e.g., cigarette designers, which is pretty understandable even if one does think that JavaScript is basically cancer (a position I would not take myself; I think it's more akin to sugar: good, or at least neutral, in very small amounts). No one wants to be told that the way he earns a living is fundamentally wrong.
Or, in the words of Upton Sinclair, 'It is difficult to get a man to understand something, when his salary depends on his not understanding it.'
Huh, I'd given up on disabling JS because reddit required it to reply to posts, and I'd assumed HN does too. Turns out HN doesn't. Thanks for providing the impetus to check (I'd expected to say something like "well the site doesn't work without JS, so I guess they didn't stick around" but wanted to check first), I deleted my reddit account a while ago and it turns out that was the only thing keeping me tied to JS.
I really am not a fan of JS for all of the reasons that have been retread a million times over the years, but honestly the war is long lost and I don't really see the need to annoy people about it anymore.
They kind of are hanging out on HN, given that a story in such a spirit is being upvoted into the front page once a week or two, but they also seem a bit bipolar in that they swing between wanting to use the internet ascetically and defending the "web is your new OS and TV and it's a good thing!" attitude because they do it for a living, and that 150k is a 150k after all.
Different people have different opinions. Let's make 2022 the year we move away from grouping vaguely related people and then calling the group inconsistent.
I've also not seen any recently aside from the one we're in (published 2016). What was the most recent one you recall seeing outside this one?
E: just realising this comes off a bit snarky after the group quip, apologies. Asking out of legitimate interest not as a gotcha or anything.
At this point I've just given up worrying about it, figuring it's an outdated opinion, and I've not seen much recently to push it in any direction other than toward JS support being expected.
HN is also my procrastination tool of choice, so it might be I can see articles and manifestos urging for the return to the glory of blogosphere and web 1.0 before they sink down into oblivion.
It's like how you might get the impression that on HN there are always endless debates about note-taking in software versus on paper. Sure, they're there. There's good stuff too, at least every other day.
A very good and recent website for learning about Swift does not have a single line of JS :) https://www.swiftbysundell.com
Some people still do care! Admittedly not a lot though.
We’ve been migrating our React front end code to Phoenix’s LiveView for any client-side code that requires server interactions, and use AlpineJS for client code that doesn’t. So far, we’ve been really happy with it.
This is a big reason I’m really enjoying using Remix.
By default they aim to make building JS-less apps super easy through their framework. And of course you can progressively enhance your app with JS if it's available.
Maybe developers need to start testing their JavaScript-heavy webpages on low-end phones and computers to realize the terrible impact they have, and how they're not very "inclusive", since they discriminate against people who can't afford a high-end phone or PC. Especially when most administrative tasks now rely on filling in forms on the internet, mobile or desktop.
Correct. And the article doesn't say so either. It just says that using it in pointless, negative-value ways is bad - and there's all too much of that. If you need a link, use <A ...>, for example, not a custom JS handler.
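For example (the /pricing URL is just a placeholder):

    <!-- a plain link: keyboard focus, middle-click, "open in new tab" and
         screen readers all work for free -->
    <a href="/pricing">Pricing</a>

    <!-- the anti-pattern: none of that works without extra effort -->
    <span class="link" onclick="location.href='/pricing'">Pricing</span>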
I tried a bunch of different browsers today on a lark, because Firefox has become the monstrosity it was once created to replace, and new Linux distros just keep getting slower and slower on this aging Core i7-6500U laptop. The browsers included Edge, Epiphany, Vivaldi, Opera, Otter, Palemoon, Netsurf, Dillo.
Let me tell you, it was a gas. So many broken pages. Such a wide range of performance and "user experience". Browsing with Dillo was like looking at the internet through an abstract art painting, which was even more striking because I still remember when I could actually browse most websites with Dillo, and now links2 -g (remember how Links has graphics mode?) is literally more usable for browsing the web.
Turns out Epiphany is extremely minimal, fast, and renders pages well (and even supports Firefox Sync? that's cool), so I'm gonna try that as a daily driver.
I think this has come up on HN enough that people probably already know about these alternatives, but I really recommend using a Nitter front-end to browse Twitter if you don't actively maintain a Twitter account.
It doesn't help if like Eevee you do have an account and are actually tweeting, but if all you're doing is browsing then it runs completely Javascript-free.
----
In regards to the actual content of the article, it's an eye-opening experience to dig into how a lot of these pages are constructed and to realize that sites like Google search, Facebook, Twitter are not really designed around having the simplest most performant implementations.
There is some dark eldritch magic that happens behind the scenes on a lot of these sites that keeps them working with accessibility readers and nothing else: trying to make it harder to find hrefs, trying to get rid of right-click open in new tab commands, trying to make it impossible for browser extensions to identify parts of the HTML. And I think it's really easy at first to say, "this is a complicated problem we don't understand, of course this must be a super-performant hyper-optimized way of doing everything." But it just becomes harder and harder to justify that over time. The link wrapping that Google search does really doesn't have anything to do with performance, it's designed to close holes around how people open links without sending pingbacks to Google servers.
There is a real incentive battle between tracking/control and simplicity/performance/flexibility, and the performance side doesn't always win, it seems that big companies are actually willing to introduce a huge amount of engineering complexity for an outcome that's almost as good as a normal href but that allows them to accomplish other goals as well.
The level of complexity in these sites sometimes gets used as an excuse to avoid criticism, but I think that sometimes the complexity is there purely because sites are actually fighting with browser technology; Google's setup for link wrapping is very complicated, Twitter's text entry is very complicated, because they're trying to take control of a process that they don't normally have control over. So they build very complicated houses of cards that fall over in weird situations, and the complexity of the house of cards becomes the defense against criticism that the house of cards keeps falling over.
I look at projects like Nitter; Nitter is not a full Twitter replacement, but in regards to the parts of Twitter that it does replace, it's better engineered. It's less complicated, the engineering is less impressive, but impressive engineering is not the same thing as good engineering. For the features it's replicating, Nitter works better than the real thing, and I think the reason is that it's less complicated and that it's fighting less with the browser.
Server-side rendering is something that has been tragically overlooked by a lot of teams.
No amount of client-side JavaScript could ever hope to bring fresh business data to a customer's eyeballs faster than the server simply delivering the final DOM on the initial page request.
Our Blazor web apps are running in server-side mode, and they are some of the fastest web applications that we use on any given day. Server-side doesn't necessarily mean full page reloads upon interaction; WebSockets are pretty cool.
Hacker News is the only other website that feels approximately as fast to me.
Is it scaling well? Do you have a lot of concurrent users?
I'm thinking of using Blazor Server in a production app, but the latency and higher costs may be a problem. WASM is not ideal for me because of the large initial payload.
I am not sure what JavaScript buys you, but from the point of view of building a server-side app and having to do the frontend... uh, it adds a lot of complexity.
I am taking a look at htmx. It looks way closer to what HTML is, and it is more than enough for my purposes, at least.
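For anyone curious, what drew me in looks roughly like this (the /search endpoint is made up; the attributes live in the markup and the server just returns HTML fragments):

    <script src="https://unpkg.com/htmx.org"></script>
    <!-- htmx fetches /search as the user types and swaps the returned fragment into #results -->
    <input type="search" name="q"
           hx-get="/search"
           hx-trigger="keyup changed delay:300ms"
           hx-target="#results">
    <div id="results"></div>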
> I can almost hear the Hacker News comments now, about what a luddite I am for not thinking five paragraphs of static text need to be infested with a thousand lines of script.
I'm sure you already know this (although I've had conversations in the past with people who hate on TS and didn't, hence my post), but TypeScript is JavaScript at runtime: there is no performance hit from using it in production (and negligible build time on save during development).
Shame it's so easy to get comments flagged and killed on HN. The parent comment wasn't inflammatory and (as we can see) contributed to relevant conversation.
Plenty of people still write JavaScript. TypeScript is rapidly gaining popularity, but I'd be surprised if it's reached even parity with plain JS usage yet. People using Angular are probably mostly using TypeScript, but there are still huge communities (React, Vue, Node.js) where TS is optional.
When prototyping quickly I prefer plain JavaScript 100% over TypeScript. A lot less boilerplate. Faster, no compilation, and I can run it in a browser console.
For a larger project I might consider TypeScript.
Vanilla JS is more rare these days. I interview 2-3 devs per week and always ask basic vanilla JS questions since it’s vital for our workflow that you don’t rely on frameworks we may or may not be using. Knowing react/vue is a bonus, but knowing how to do basic tasks like adding event listeners and manipulating the DOM in vanilla JS is required.
It’s anecdotal but I just wanted to say there are a few places that still haven’t fully committed to a framework.
I’d say at least 1 dev per week can’t add an event listener or manipulate styles in plain JavaScript. It’s getting to the point where I’m going to need to stop asking and just test in whatever framework they know. Then retrain when they come on board.
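For context, the baseline I'm talking about is roughly this (the button id is just an example):

    <button id="save">Save</button>
    <script>
      // wire up an event listener and tweak a style, no framework involved
      const button = document.getElementById('save');
      button.addEventListener('click', () => {
        button.style.backgroundColor = 'green';
        button.textContent = 'Saved';
      });
    </script>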
Not going to lie, I'd need to google that nowadays if I wanted to do those things! It's an issue with polyglot development. Can't you ask more fundamental questions instead, like when to use an event listener, or best practices?
And this is why we need technical tests during interviews. There are people who call themselves "web developers", apparently been writing websites for 5+ years, but don't know about addEventListener.
Even better, make their direct supervisor take the same test. If they outscore them, they get their job and the supervisor then is demoted to the job being hired for.
Surprised to see the top comments here so doom-and-gloom: "ah it was posted in 2016, I don't think anyone (but do wish) cares about limiting their JS usage anymore".
The tools we have available today to optimize JS and build for progressive enhancement are light years ahead of 2016 because the smartest minds in the industry have optimized for it. The rise of batteries-included server-side-rendered React apps like Remix and Next massively cut down on time-to-first-paint and React 18 makes it easier than ever to defer rendering of async or interactive portions of the page (e.g. the Like count, the reply form) in favor of getting static content on the screen as fast as possible.
[1] https://github.com/WebKit/WebKit/blob/main/Source/WebCore/pa...
[2] https://github.com/tholian-network/retrokit