I prefer the design of Firefox Reader Mode. That is-- do your thing, modern web. Once you finish I click a button to destroy your Rube Goldberg machine and retain only the tiny marble I wanted in the first place.
I only wish it could detect when asshole newspapers do that last minute mutation of the DOM to put bullshit divs in front of the article. It makes me play a button-click race to beat the rendering of the bullshit div to reader mode.
Btw-- why do asshole newspapers actually send the article content before adding the bullshit divs? If you're gonna be evil, just send the first paragraph over the wire and make it appear as if the rest of the article is just one email sign-up away.
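For what it's worth, detecting that last-minute mutation is at least plausible in principle. A rough sketch of the idea, with an entirely made-up heuristic (a real reader mode would need something far more robust):

    // Hypothetical sketch: watch for elements inserted after load that look
    // like a paywall overlay (fixed-position, covering most of the viewport).
    // The heuristic is invented; real sites vary wildly.
    const observer = new MutationObserver((mutations) => {
      for (const mutation of mutations) {
        for (const node of mutation.addedNodes) {
          if (node.nodeType !== Node.ELEMENT_NODE) continue;
          const style = getComputedStyle(node);
          if (style.position === 'fixed' &&
              node.getBoundingClientRect().height > window.innerHeight * 0.5) {
            console.log('Possible paywall overlay inserted:', node);
          }
        }
      }
    });
    observer.observe(document.body, { childList: true, subtree: true });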
Btw, if you do this often, look into the Auto Reader View addon. Sites you visit regularly that support reader view will then open in reader view immediately. The escape hatch works too, which is nice. https://addons.mozilla.org/en-US/firefox/addon/auto-reader-v...
Safari has this built in and it's magnificent. I can't remember the last time I had to peer through a 20px high peephole at content on the Guardian website.
> I only wish it could detect when asshole newspapers do that last minute mutation of the DOM to put bullshit divs in front of the article. It makes me play a button-click race to beat the rendering of the bullshit div to reader mode.
More often than not, going into reader mode, then hitting refresh will fix this! (works for NYT)
Cloaking is showing different content, no? Not making the content easier for Google to read than it is for normal users. Lots of newspapers show Google the full text of paywalled articles.
Disabling javascript almost always works too - frequently where blocking elements doesn't, because the js does something fancy like delete the non-visible content elements.
> It makes me play a button-click race to beat the rendering of the bullshit div to reader mode.
Wouldn't the reader still work if they just put a div over the content? And couldn't you just delete the offending div in the DOM instead of racing against the Javascript? Or just disable the Javascript entirely and re-load?
1. Something renders that appears to be the entire article. Winning the FF Reader Mode race confirms it is indeed the entire article.
2. Bullshit Div appears blocking my access to content. First paragraph of said article is greyed and now ends with an ellipsis, or "To read the rest", or some such bullshit text.
3. If I lose the race, FF Reader only shows me that first paragraph.
Usually it fails to work when they actually remove most of the article from the DOM (e.g. "Subscribe to read the rest of the article!"), rather than when they cover it up.
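When the article is merely covered up rather than deleted, a quick console snippet along these lines usually does the trick; the selectors here are hypothetical, since every site names its overlay differently:

    // Remove a hypothetical overlay div and restore scrolling.
    // Only works when the full article text is still present in the DOM.
    document.querySelectorAll('.paywall-overlay, .modal-backdrop')
      .forEach((el) => el.remove());
    document.documentElement.style.overflow = 'auto';
    document.body.style.overflow = 'auto';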
To ignore one of the pillars of the modern web seems silly to me. I understand that a lot of sites have become bloated and resort to chunking in JS, but I disagree that JS is used as just seasoning on top of a website.
What about "content" that certainly is a document, but which just happens to not be a hypertext document? For example, a mind-map, or zoomable-level-of-detail timeline. Content that, on the modern web, you'd expect to be sent as a lightweight Javascript viewer app wrapped around the document itself. But where that's only required because the web-browser itself isn't already doing the job of attempting to render that document. Like it does with, say, SVG.
A lot of people think that "web browser" has some fundamental identity with "hypertext document viewer", but that's not what "the Web" is — the web is the set of resources you can request over HTTP; not the subset of those resources that are formatted in HTML. A "web browser" is fundamentally "a tool for letting you navigate to the resources at HTTP URLs" — essentially an address-bar with a "meta-renderer sandbox" attached.
(Remember RealPlayer, QuickTime, and other plugins you could install that would enable your web browser to "natively" render new content types? Those things are all also "the web"!)
And to put an even finer point on it: if you put a PDF, or an SVG image, at a URL; and that PDF or SVG image contains clickable links — then that document is a web hyperdocument. A web of PDFs is just as much a web as a web of HTML documents is! So not even "publishing hyperdocuments on the web" is necessarily coupled to HTML. (Tangent: remember VRML? It allowed clickable links, too. VRML was a hyperdocument format!)
(Are plaintext files that happen to include textual URLs, hyperdocuments? If so, then a gopher or FTP client would also technically be a "web browser." That's going a bit too far, I think.)
You are arguing a point I didn't make. I think it would be great if we built standards for consuming media and content like mind maps and charts - I just don't want to be forced to run arbitrary code to be able to consume that content. We've been led into a false dichotomy.
My point was that — allowing that "the web" is just arbitrary 'stuff' at URLs, and that a "web browser" is a tool for viewing that arbitrary 'stuff' — there are always going to be novel non-standardized, proprietary document types residing at URLs, that people will nevertheless expect their web browser to display for them (and rightly so!)
How will users view these documents in a timely manner (i.e. not on the W3C's glacial schedule)? Whatever your answer, it's going to necessarily involve some kind of arbitrary code execution at some point. Whether that means Javascript; Java/ActiveX applets, or PPAPI (NaCl); Chrome Apps or other in-browser centrally-managed-lifecycle apps; installable NPAPI plugins like Flash; or COM component servers (as seen when e.g. IE/Edge is navigated to an .xlsx file URL.)
Presuming the user discovers they can't view document X, and therefore goes and hunts down tech Y and installs it to make X viewable, all such "tech Y" are equivalent in the sense that they're someone's arbitrary proprietary code, that the user is rolling the dice on hopefully allowing them to view X, while not doing anything other than allowing them to view X.
> allowing that "the web" is just arbitrary 'stuff' at URLs
It's not. The majority is just textual content I want to read. Possibly with a few images.
I don't need Javascript for that. And being forced to use Javascript for it is the exact problem.
For the minority (something clever that requires user interaction beyond reading text and looking at pictures, this "non-standarized", maybe even "proprietary" stuff) Javascript is just fine.
But don't force me to use Javascript for the rest.
The majority of web developers think the semantic web is a waste of time. (I don’t)
Pushing your point further, you’re saying that every new development that could be shareable over the web gets published as a standard and that browser developers build implementations of those standards, before the content is released. By the time that’s happened, your mind map will be out of date.
When anyone creates a novel representation someone will need to develop new code that renders it. Whether that goes into a native browser or a Java applet or a JS script, it’s still arbitrary code downloaded from a source you need to establish trust with.
You’re taking this too far. (And I admit my illustration smudges both your and my point).
The problem we really have is that advertising and SEO have corrupted the web to the point of unusability. If we (Google, mostly) hadn't monetised individual tracking, we wouldn't need the worst bits of what people do with JavaScript. Publishers are fighting for your attention because that creates surface for advertising, and advertising creates revenue. If they could sell content directly instead, consumers wouldn't get caught in an attention war and websites would be less intrusive and demanding.
Javascript itself isn't the problem. I'm not convinced that scripted documents are the problem. The problem is surveillance capitalism and the attention economy.
GP's critique is not that web browsers render SVG and PDF in addition to HTML. It's the outsized role of JavaScript specifically: our browsers execute arbitrary (possibly user-hostile) code while simultaneously trying to protect us from its effects.
The point I was trying—perhaps failing—to make, is that it doesn't make much sense to create a web browser that can render "web documents" but not "web apps", since the concept of a "web document" is basically "anything you can access over HTTP", and so a "web browser" — a tool for viewing arbitrary "web documents" — really doesn't work as a concept except by adding some mechanism to execute arbitrary code in the service of viewing those arbitrary documents.
That mechanism doesn't necessarily need to involve code being delivered directly from the same server the document is coming from, mind you. It could instead involve an out-of-band app store, where accessing a URL zero-installs a heavily-sandboxed plugin required to view it, and loads it into the browser. (Implementations of this: iOS App Clips; Sandstorm grains.) But those apps are usually still created by the same developer as the document anyway, so there's little difference between this out-of-band zero-install app approach, and having Turing-complete web documents. Either way, the developer of Boogle Maps can still mine bitcoin in the background while you have the map loaded.
The B2B productivity/collaboration app space — where you can expect that all your users are mainly using full PCs — is firmly all-in on in-browser web-apps, with their mobile experiences being second-class afterthoughts.
I imagine apps are better at being apps than a web browser is, and due to their complexity webapps are also tightly coupled to a specific brand and version of browser, so you have fewer compatibility problems by bundling your preferred VM with your application.
I'd argue today that "apps" do pretty well whether they're in a browser or native. Native Slack is pretty good. Web Google Docs/Sheets is pretty good. VSCode is almost the same regardless of where you run it. But equally, Sketch and Ableton Live are also pretty good. Native Teams and Zoom are travesties of usability and stability.
Basically, I don't see consistent patterns of good or bad between web and native apps to justify your comment that "apps are better at being apps than a web browser".
Most GUI apps, native or otherwise, get associated with a brand. Especially the good ones. I'd buy Ableton swag, I love the brand that much.
The VAST bulk of web apps are NOT browser version specific. It's largely only bespoke, internal corporate apps that are tied to one version of a browser.
Javascript is not necessary to consume content. It wasn't necessary twenty years ago. It's necessary to deliver complex applications that happen to also run in the browser. We don't need another standard to usurp HTML+Javascript, we need to stop the practice of using technology that is meant for writing applications like Google Docs to display two paragraphs that were copied from the AP news feed.
I think it really depends on the audience. While certain use cases justify optimisation and SSR, for many sites it just doesn't matter that much. If most of the users are on desktop and have fiber/4G, why bother with all of that?
It looks to me like this design concept is meant as a browser for consuming the web of documents, i.e. pages meant primarily for reading. A news article or a blog post has very few legitimate reasons for using Javascript. (Perhaps some very advanced interactive charts/graphs/maps?)
You'd obviously want to use a different browser to run sophisticated webapps.
When you are "browsing the web", i.e. looking for information and following links to various places, how much of your time is spent interacting with complex webapps vs. reading text, looking at images, and following links?
I would say that if you are using a webapp, you are not really "browsing the web" anymore, but simply interacting with an application in much the same way as if it was a local one.
I think breaking the pillars of the modern web is kind of the point here. JavaScript is necessary for web applications and it can be enabled manually for those, but it shouldn't be needed for static text.
JavaScript frameworks like Next and Nuxt are great at generating static markup. One thing slowing them down a bit, and giving AMP a larger share of attention, is that not enough people are browsing with JavaScript disabled. Disabling JavaScript on the browser doesn't mean JavaScript is out of the picture. =)
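For reference, a minimal sketch of static generation in Next.js; fetchPosts is a hypothetical stand-in for whatever data source the site has, and the page ships as plain HTML rendered at build time:

    // pages/blog.js -- rendered to static HTML at build time by Next.js.
    export async function getStaticProps() {
      const posts = await fetchPosts(); // hypothetical data source
      return { props: { posts } };
    }

    export default function Blog({ posts }) {
      // This markup arrives fully rendered; no client JS is needed to read it.
      return (
        <ul>
          {posts.map((post) => (
            <li key={post.id}>{post.title}</li>
          ))}
        </ul>
      );
    }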
Indeed, I don't think the people arguing against JS are arguing against all uses of it --- rather, it's become synonymous with "client-side scripting/code execution", and even if that was a totally different language, the point still stands.
"Websites should only be using JavaScript to enhance the overall user experience. If a web page fails to load properly with the absence of any scripts, then that site is poorly optimized and deserves to be ignored*."
As much as I would like for this to be true, it is unfortunately wrong. I routinely browse with the NoScript Security Suite add-on, and more than half of the sites I visit require JavaScript. In some cases the page will not even load. In many cases the page will load, but none of the links will function.
I wish that web design standards required non-JavaScript enabled browsers to function, but I just don't see it ever happening.
Are those "more than half" sites actually interactive webapps, or just "document" sites that you come across while searching for something? Because even this article says there's a place for the former; it's the latter which really shouldn't need any JS, and in my experience the vast majority of the ones I come across don't.
The fact that you see things that look like links, but don't actually function without JS, strongly suggests "webapp" --- I bet those are "links" that you can't copy nor open in a new tab either.
I use uMatrix with JavaScript disabled by default, and it’s well under one in ten sites that break significantly without JavaScript, though blank-page-with-everything-loaded-through-JavaScript is distressingly common in some sectors; the sort of thing that becomes a “Show HN” landing page, now that is probably closer to 50-50.
Note that I am speaking of content sites; apps I am excluding from this reckoning. Among things that could rightly be called apps, yeah, it would be more than half that require JavaScript, but not for documents.
The sad truth is web developers don't worry about web standards unless they affect the way they think their site should look.
I should correct that. Many web developers likely do their best to build to whatever standards/best practices they can. Management, design specs, clients, etc. rarely if ever care about those standards and don't pay/allocate time for conforming to them.
There are a fair number of web developers who are lazy too, but I suspect web developers care about best practices more than most other stakeholders.
It’s not that we don’t care, it’s just that there’s not enough time in the day to cater to all sorts of esoteric configurations.
Sure, you can make a non-JS fallback version of your interactive elements, but that often takes twice as long, and since 99.99% of web users have JavaScript enabled, it’s simply not worth the time to build separate versions for those 0.01% of users who don’t.
> Sure, you can make a non-JS fallback version of your interactive elements, but that often takes twice as long
This is largely just not true, and the results are often not entirely legal. The majority of interactive controls can be supported by extending and styling bone-stock HTML and using progressive enhancement. The only way it takes less time to roll your own solution for most of these components is when you don't bother implementing accessibility for your controls.
That isn't just a few esoteric users at this point: you are now brushing up against the ADA and have just exposed your site to potential legal action, lawsuits, and a big honking fine. (Not to mention that you're being an ass to a bunch of people who need that functionality.)
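To make the progressive-enhancement point concrete, here is a minimal sketch (the form id is hypothetical): the underlying form submits normally without JavaScript, and the script only upgrades it to an in-page submit when it happens to run.

    // The underlying <form action="/comments" method="post"> works on its own;
    // this script is pure enhancement and can fail or be disabled harmlessly.
    const form = document.querySelector('#comment-form'); // hypothetical id
    form?.addEventListener('submit', async (event) => {
      event.preventDefault(); // only reached when JS is enabled
      const response = await fetch(form.action, {
        method: form.method,
        body: new FormData(form),
      });
      if (response.ok) {
        form.insertAdjacentHTML('afterend', '<p>Thanks for your comment!</p>');
      }
    });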
Nonsense. Anything that interacts with the server via AJAX and the like, or does client-side rendering, will need a completely separate implementation to work without JavaScript.
As for the ADA, there’s nothing in there that demands your site works without JavaScript, it just has to be accessible to screen readers and the like, which all support JavaScript.
SPAs and things like React brought on this phenomenon. To its credit, that same ecosystem is now doing a great job of digging us back out of this hole.
I read a lot of web dev subreddits where many/most members are newer developers. Virtually all of the sites they post render nothing at all with JS disabled. I point this out and suggest some ways to fix it. A few have appreciated the tips but most just don't care at all. I'm not sure how to effectively reverse this trend.
Are you familiar with Opera Mini's (mobile) extreme data saving mode? It's quite remarkable at reducing web page sizes, and has an option to render images at a preferred quality or turn them off altogether. Also, some of the design of the original web page is preserved, and it's somewhat usable.
But the issue with it is that all the compression happens on their servers before reaching you, which is troublesome from a privacy standpoint. I wish there was a browser (or a self-hosted instance) similar to Opera Mini which could compress pages before they reach the client.
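A self-hosted version could start as small as the sketch below (assuming Node 18+ for the global fetch; real Opera Mini also reflows layout and recompresses images server-side, which this does not attempt):

    // Crude self-hosted "compressing proxy": fetch the page server-side,
    // strip <script> blocks, and serve the slimmed-down HTML.
    const http = require('http');

    http.createServer(async (req, res) => {
      // Usage: http://localhost:8080/https://example.com/article
      const target = req.url.slice(1);
      try {
        const upstream = await fetch(target);
        let body = await upstream.text();
        body = body.replace(/<script[\s\S]*?<\/script>/gi, ''); // drop scripts
        res.writeHead(200, { 'content-type': 'text/html; charset=utf-8' });
        res.end(body);
      } catch (err) {
        res.writeHead(502);
        res.end('Upstream fetch failed');
      }
    }).listen(8080);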
This seems like a fantastic concept for people in the developing world. Also for people who need to travel worldwide and may find themselves operating on "two tin cans and a string," as I once saw someone remark on a discussion thread.
Hell, Text-only mode sounds like the way I already consume a lot of news articles via my browser's reader mode.
I'm just going to say that if I see this big red box telling me the browser blocked ads, it's the last time I use that browser. I want ad blocking so I don't have to look at bright-colored stuff distracting me from the content.
Stopped reading and closed the tab at "JavaScript disabled by default. Websites should only be using JavaScript to enhance the overall user experience".
Good luck holding your thumb up to try to blot out the sun. Absolutely never gonna happen.
Either this project is someone's wishful thinking of what a perfect world looks like to them (in which case I'm not interested), or this is actually a serious attempt to enforce arbitrary rules on web sites, and the authors are trying to do it at the expense of the users, in which case I don't care either.
At a first glance, this looks good. However, users will be quickly annoyed when they can't interact with 95% of websites and the other 5% throws page weight errors.
I want the exact opposite. All the current efforts in amateur browser building are focused on documents. Gemini, and HN's other perennial favorite, Gopher, both look several decades out of date. I want a clean-slate platform that is focused on high performance, superb design, and great developer productivity, with an emphasis on 2021 hardware. A clean-slate redesign of HTML with WebAssembly GUI widgets as a first-class citizen (instead of the current "draw-to-canvas" hack).
Gemini and Gopher will never gain traction if they perpetually look anachronistic. Forget about stuff like the semantic web. Those are dead ideas that will never take off unless they are properly backed by a strong business model.
When I look at Gemini and the various new browser experiments, I get the feeling that HN web engineers have lost their way. I know this forum worships Alan Kay and Bret Victor, but if all the new web browser projects here focus on the "good ole days" where everything is a semantically linked document, then there is no more room for growth. Where's the emphasis on UI and UX? Users clearly want apps that run fast and look pretty. For some reason engineers here reject that proposition and prefer ugly-looking documents in the name of simplicity.
Compiler and language technologies have advanced leaps and bounds since the Netscape days. We have .NET, LLVM, GraalVM, Julia, and two dozen different high-speed, production-ready WebAssembly interpreters. 16+ core hardware is starting to become the norm. Yet everybody is obsessed with the concept of a boring Tim Berners-Lee-era document.
Dream bigger. Figure out how to get native-level performance on mobile devices if you are building a "web" browser. Make creating responsive GUIs as easy as traditional absolutely positioned Windows Forms. Design a browser such that making a Stripe-level gorgeous landing page becomes trivial. Integrate Figma as a first-class GUI design tool. Make collaboration and CRDTs first class (like the Braid RFC, https://braid.news). Use neural models to speed up content delivery and asset loading.
The wave of unimpressive "new" browsers that cropped up on HN in the last year reminds me of the Node.js web-framework hype during the first half of the last decade, when there was a micro web framework on Show HN every week claiming to be the One Framework to Rule Them All, despite mostly being a thin shim on top of express.js/the default Node.js APIs and offering no real value. Dream bigger.
> WebAssembly GUI widgets as a first class citizen (instead of the current "draw-to-canvas" hack).
Just an FYI: Chrome's 2D canvas renders to a texture layer by appending commands to a display list, and that list is only executed at the final stage (merged with all the other commands) in the GPU process.
So it is about as fast as it can get; it's basically the same path Chrome's own C++-painted UI uses, and you have to use canvas primitives to paint widgets either way.
Once (if) WASM gets access to something like this, it will be a canvas, the same one you already have JavaScript bindings for.
So I would not call it a hack, as you basically have raw access to paint primitives that are very optimized.
This looks like a great idea. What I would really love to see is a no-tab browser with extension support. I have come to realize that a sufficiently advanced window manager is indistinguishable from a tabbed browser, but you will pry uBlock and 1PasswordX from my cold dead hands.
True. It will leave much of the web, and certain CAPTCHA systems, entirely broken.
What we need instead is more control on what can be accessed with javascript. Things like screen and window size, locale etc. should all be spoofable and configurable by default. We also need a sophisticated firewall and control matrix right in the browser.
I can already do a similar thing for mobile apps on android. I spoof what permissions they can use and what data they actually get. That can include sophisticated things, such as stripping EXIF data from any image before an application gets access to it. Or a simple popup that asks whether an application can access the clipboard once.
I can't even fathom how most people live without such fine-grained control. The only hope is a new browser that respects the user from the beginning.
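For the browser side, a userscript or extension content script can already shadow some of the fingerprintable values; a sketch of the idea, with illustrative values (serious fingerprint resistance needs far more than this):

    // Shadow a few fingerprintable getters with fixed values.
    // Must run before page scripts (e.g. a content script at document_start).
    Object.defineProperty(navigator, 'language', { get: () => 'en-US' });
    Object.defineProperty(navigator, 'languages', { get: () => ['en-US'] });
    Object.defineProperty(screen, 'width', { get: () => 1920 });
    Object.defineProperty(screen, 'height', { get: () => 1080 });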
Surf + hosts file-based ad blocking gets you there, except for 1Password.
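For anyone unfamiliar: hosts-file blocking just maps ad and tracker domains to a dead address before the browser ever connects. The entries below are placeholders; real blocklists contain tens of thousands of lines:

    # /etc/hosts -- hypothetical entries; a real blocklist has thousands more
    0.0.0.0 ads.example.com
    0.0.0.0 tracker.example.net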
As soon as you want extensions like this, you probably need a browser based on Chromium or Firefox to have any shot at an addon ecosystem at all.
You lost me at JavaScript disabled by default. Whilst a noble aim, this doesn't fly in the real world. I had a stint using w3m for browsing, and well, not much flew.
Nice ideas, but the “No JavaScript by default” is probably a mistake.
Like it or not, most sites use JavaScript these days, and blocking it will often give you pages that are subtly and strangely broken, and if the end user is not a web expert, they might not realise they need to turn on JavaScript to fix it.
I'm not unsympathetic; I ran Firefox with the NoScript extension for years, but I eventually had to give it up, because I'd have to spend time figuring out what to unblock on most new sites I'd visit, just to make them render properly.
Well, I'm in. This looks great. And if I can't see your website because of JavaScript, that's fine. I have plenty of other options. This is a great tool in the box.
For experiments in the internet realm, people have the ability to start from scratch and avoid dealing with a shit pile like today's web. Don't like JavaScript? Design protocols and content formats that don't rely on that kind of stuff, like Gopher, Gemini, or whatever. (Also, should we even call this an experiment? All these points appear on the Hacker News front page every day.)
In my spare time I develop Haiku open source apps; my latest research/experiment/playground (as of 1st Jan 2021) is writing a new browser from scratch (not WebKit based), using C++20 features and an Actor model for multi-processing.
The FatHomeCat browser (https://github.com/fathomecat/fathomecat) had a "90s mode" option for sites like Google.com, not for the sake of disabling JS, but to actually get the simpler and better version of Google.
I have another idea. What if browsers limited the amount of JS that can be loaded? Say, no more than 2KB of JS. Enough to do basic things like form submissions and AJAX, may be some basic canvas graphics.
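As a thought experiment, an extension could approximate this today by cancelling script requests whose declared size exceeds the cap. A Manifest V2 sketch; note that chunked responses without a Content-Length header would slip through:

    // Cancel any script whose Content-Length exceeds 2 KB.
    chrome.webRequest.onHeadersReceived.addListener(
      (details) => {
        const lengthHeader = details.responseHeaders?.find(
          (h) => h.name.toLowerCase() === 'content-length'
        );
        const tooBig = lengthHeader && Number(lengthHeader.value) > 2048;
        return { cancel: Boolean(tooBig) };
      },
      { urls: ['<all_urls>'], types: ['script'] },
      ['blocking', 'responseHeaders']
    );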
That's not only not possible currently, it's fundamentally nonsensical. A browser needs ways of presenting things to the user; WebAssembly is just a small VM for executing code. You need the platform to provide some sort of UI library (e.g. HTML DOM + CSS). Beside the complexity of that lot, WebAssembly's is just a rounding error.
dillo (https://www.dillo.org/) might come close to that. AFAIR, it does not support Javascript at all, and has very limited support for CSS, so many pages look terrible in it, BUT it can have dozens of tabs open and still run on just about 100MB of RAM, plus it is really fast.