Do we need browsers? (blog.justletit.be)
122 points by barnacs on July 20, 2015 | 113 comments


Personally, I don't enjoy using web browsers much, either for "documents" or apps. There's a lot of good stuff out there made with hard work by talented people, but when it comes to my own personal computing, I would much rather use other kinds of interaction.

99% of my internet life consists of a few homogeneous styles of interaction: browsing pseudo-hierarchical directories; communicating via text; purchasing; looking at pictures and movies; etc. In a hypothetical cyber-utopia or something, this could all just be a simple protocol (say, um, combining aspects of Direct Connect, BitTorrent, NNTP, Gopher, Bitcoin, GPG) that I could use with my own client, which would probably be an Emacs interface. Instead we live in a world where the web standards seem to encourage teams to spend their energy on reimplementing drop-down widgets.

It's wasteful and doesn't even lead to particularly good user interfaces. How many sites or web apps work well (actually well, not just semi-tolerably) with keyboard navigation? How much bandwidth and CPU goes to waste? How many sites work passably offline? How easy is it to automate tasks as a user? How easy is it to learn how to make your own sites?

I think "browsers" should be something very different from what they are today, and maybe something new will come along in the near future.


The browser is what it is because it's the camel designed by global committee. It's the place where competing interest groups fight one another to a standstill on a daily basis. Quite a lot of the stranger aspects aren't so much "features" as blast craters of past disasters.

For example, consider the five technologies for interactive, animated things in the browser: Java applets, ActiveX, Flash, Silverlight, and Javascript. Apart from Silverlight, which never really took off, they're all 20 years old. Only Javascript has survived the security wars; Java applets are dead, Flash gets a CVE bullet with its name on it every few weeks, and ActiveX was killed long ago.

The near-death of NNTP is a story of spam cancels and cost allocation.

The browser covers all use cases and is available everywhere. That means it's necessarily horribly compromised compared to native solutions, but is an absolutely killer advantage for adoption. Incrementalism nearly always wins.

Furthermore, the unwillingness of people to pay directly for software leaves us with a continual problem of exploitative software. Everything from flashlight apps that steal your contact list to ads that steal your battery to connection-sharing apps that open you to liability for the actions of others. For the moment, we keep other people's software securely nailed shut in the browser.


> Quite a lot of the stranger aspects aren't so much "features" as blast craters of past disasters.

Cracked me up, very well said. In general I agree that the document-centric browser has been abused horribly. Then we get obscene beasts like node.js, because, well, the browser does it, so must we.


Node.js is interesting, and while it wouldn't be my first choice I can see how it came about: AJAX in the browser led Javascript developers to evolve a callback-chaining style; they realised they could use the same language and style on the server; and single-process, non-blocking, callback-oriented servers are inherently efficient.
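A minimal sketch of that style, using nothing beyond Node's built-in http and fs modules:

  // One process, no threads: every I/O call takes a callback, and the
  // server keeps accepting connections while the file system works.
  var http = require("http");
  var fs = require("fs");

  http.createServer(function (req, res) {
    fs.readFile("./greeting.txt", "utf8", function (err, data) {
      if (err) {
        res.writeHead(500);
        return res.end("could not read greeting");
      }
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end(data);
    });
  }).listen(8080);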


Does anyone think that JavaScript could overtake PHP as the "go-to" language for server-side scripting with Node.js as the "go-to" runtime?

Despite my indifference to Node.js, I can see the unification of client-side and server-side scripting into a single language being incredibly valuable to the millions of PHP and JavaScript developers (both front- and back-end) out there: most of whom are beyond the realm of Hacker News and just want to get things done.

Not only that, it would significantly reduce the barrier to entry for web scripting and discourage beginners from picking up all those nasty habits, such as md5 password hashing and manually constructing SQL queries from raw input, which are so worryingly accessible.

The only things I can see preventing this are:

1. Lack of a mod_node for Apache. Perhaps this exists?

2. The upload-and-refresh workflow so many PHP developers are familiar with.

3. A scheduler to prevent any one website from blocking the Node.js event loop on shared hosting.

Is this completely bonkers, or is it quite sane?


Not quite correct. As memory serves, folks have offered server-side JS as far back as Netscape.


Just want to add Unity WebPlayer[1] to the list of interactive technologies.

It is not as popular or as generic as the five you mentioned, but I do see it pop up a lot in casual web gaming. I don't like it, because it is just another plugin you need to use the "open" web.

[1]: https://unity3d.com/webplayer


Unity's been working on improving its HTML5/asm.js export, so hopefully the need for a plugin goes away someday


> In a hypothetical cyber-utopia or something, this could all just be a simple protocol (say, um, combining aspects of Direct Connect, BitTorrent, NNTP, Gopher, Bitcoin, GPG)

The article does mention p2p and decentralization, but the browser isn't what's holding that back. The client-server development paradigm is.

Currently, every dev making a p2p app has to roll their own platform because there's no standard (i.e. no LAMP, Rails, or Heroku for deploying p2p apps). Some projects are working to change this, namely BitTorrent Maelstrom (a fork of Chromium with native support for magnet URLs, so it auto-renders an HTML/JS package distributed via torrent) and http://ipfs.io (a sort of content-addressable p2p filesystem, or BitTorrent on steroids).


Yeah, that's super interesting stuff... Also Ethereum, which as I understand it is a way to do distributed algorithms in a very general way.


Or, in short, you want everything to run in an Emacs interface. The world has moved on. We have mice now. Sorry. Google implemented keyboard navigation in their search results and apparently the uptake has not been great.

CPUs are there to be "wasted". You pay for that clock frequency to use it and make your life better. Most people don't want to do everything in a terminal. It's depressing.


We had mice for a brief instant of computer history, then came the phones and tablets and e-readers and watches and TVs and voice interfaces and who knows what's next?

I worked on a web-based replacement for medical record software, and a primary complaint from users was that they wanted to be able to use the keyboard. These weren't Unix nerds or whatever, they were normal people who wanted to get stuff done effectively and ergonomically.

Nobody knows you can do keyboard navigation on Google, and anyhow that's just one website with a special custom JavaScript. Why keyboard navigation on the web is so horrible and embarrassing is an interesting question with lots of strands to investigate.

Why would CPUs be there to be wasted? Power efficiency is even an ethical imperative these days. If there are more efficient ways to do the basic tasks of computing and networking, that also allows CPUs to do more interesting and valuable things.

It's not about Emacs, that's just my personal idiosyncratic preference. Other people have other preferences, which is exactly the point. Note that Emacs works very well with many network standards, like email, Usenet, IRC, RSS, even BitTorrent. These are protocols that have enough semantic structure, openness, and simplicity to allow access with different clients that present the information in whatever way is optimal for the user.

As for most people not wanting to use terminals, how do you know? Maybe they just don't like the existing terminal interfaces. Unix commands are notoriously cryptic and shells don't help much. But if you showed a normal person a nice-looking shell where you could type "netflix breaking bad s3" and get a nice presentation back, would they really be depressed?
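For illustration, a hypothetical sketch of how such a shell might read that line (the service name and the season-filter syntax are invented):

  // Split the line into a service, a free-text query, and an optional
  // season filter like "s3"; a real shell would dispatch to a plugin.
  function parseCommand(line) {
    var tokens = line.trim().split(/\s+/);
    var service = tokens.shift();
    var season = null;
    var query = tokens.filter(function (t) {
      if (/^s\d+$/i.test(t)) { season = t; return false; }
      return true;
    }).join(" ");
    return { service: service, query: query, season: season };
  }

  parseCommand("netflix breaking bad s3");
  // -> { service: "netflix", query: "breaking bad", season: "s3" }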


As the author notes, Emacs was an example.

The browser interface is terrible. We use bug tracking software. JIRA. It's horrible. Click here, click there, this screen closes, that one opens, this thing is over here. Ugh. It is just a click fest. For this sort of thing I want an MDI interface. I read a report - it sounds like a duplicate. I want to search for it. I want to go to a search window, do the search, and compare them side by side. Or I want to look at the backlog and the current sprint.

Sure, to an extent I can open multiple browser windows or tabs, etc. But this is another example: one window for everything you are trying to do is often cumbersome and terrible. You are forcing UI design on me, the user, every. frigging. time. I. use. your. app. I don't want to open 5 tabs to the same site, maneuver them around, deal with "going back requires page resubmission" warnings, let alone deal with sites that don't understand my 5 tabs are all one session in my mind. Don't make the user solve that problem every day they use your product.


Lots of people including myself make heavy use of the mouse in Emacs. And pretty much the only time I use a terminal these years is to recover from having inadvertently messed up my .emacs file.

In fact, in at least one way, Emacs has better mouse support than either Firefox or Chrome do, at least on OS X: in Firefox or Chrome, even when the browser's window is right up against the right edge of the screen, the rightmost column of pixels does not belong to the browser's scroll bar, which makes it hard to interact with the scroll bar, particularly since OS X 10.7 or 10.8 when scroll bars got thinner.

In contrast, in a properly-designed OS X app, like TextEdit or TextMate, whenever the right edge of the app's window is up against the right edge of the screen, the user can just jam the mouse cursor to the right and start interacting with the scroll bar without the need to get the cursor into a thin target region.

Now most Macs these days come with trackpads, and the problem I just described doesn't apply to users who scroll by using (two fingers on) the trackpad, but if the Mac is a desktop Mac (i.e., not a MacBook) and the user prefers a mouse to an external trackpad, this is an important issue.

Emacs, particularly the Mitsuharu version of Emacs (which everyone who uses Emacs with a pointing device on OS X should be using because the FSF version often has bugs that affect Emacs's integration with OS X's graphical subsystems) gets this "rightmost column of pixels" issue right. (Sublime Text and Atom are 2 more apps that do not get it right.)

So in summary, although Emacs looks like an ncurses- or terminal-interface app and not a native GUI-based app and although many Emacs users are vocal pointing-device skeptics, not all Emacs users are mouse-haters, and Emacs actually has very good support for pointing devices.


Google hijacking the keyboard so it can only navigate through their search results, instead of letting the user scroll the page with the keyboard as usual, is indeed pretty annoying imho.


That's the entire point of the semantic web.


Regardless of what he's said, it's nice to read a webpage that for once didn't take 5 seconds to load, didn't clobber my screen with unnecessary junk or have jerky scrolling.


I would have appreciated a sensible max-width on my fullscreen desktop browser.


I wouldn't. I like the page wide. I like that by not setting a max-width, I can have my browser at whatever width works for me. This is how the web was supposed to work.


Agreed... well-circulated research demonstrates maximum comfortable line widths for various media.

It's frustrating when someone decides "damn the system" in the name of a proto pre-design-standards aesthetic instead of just making sure standard zooms leave their content at 12-14 words per line.


On the other hand, you can have your own stylesheet, without even a single !important ;)


Safari's Reader View to the rescue ;)


Which kind of defeats the purpose of the original article.


I think it does the opposite. The author wanted to send a structured document without presentation; with the tool of your liking (I used Firefox's Reader View) you can read comfortably without compromising the content of the article or its features.

edit: I could have used curl, an HTML processor and a pager like less or more


I'm pretty sure the author is against using special tools to read static text. That's his whole point, that people don't need browsers/readers or any fancy app to read static text!


You have one. It's called "the size of the browser window". If you don't want a site to be the size of your monitor, don't use a full screen window.


This is how I use the browser - 1920px is way too wide for reading, and I like my browser windows in portrait mode for more vertical space anyway. Unfortunately, yet another idiotic "feature" of JavaScript (and CSS media queries) allows the webpage access to the actual monitor resolution. Some particularly idiotic sites use this, causing utterly unreadable pages (i.e. all "sidebar" and margin) if you have your window at something like 900px wide instead of the 1920px they are detecting.

This is mostly the website's fault, of course. It's the same forced-layout mentality that has always caused problems on the web.
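To make the distinction concrete, a minimal sketch of what a page's script can see (the breakpoint value is just an example):

  // screen.width reports the monitor, innerWidth the actual viewport; a
  // site that branches on the former misjudges a 900px-wide window. CSS
  // media queries expose the same pair as device-width vs. width.
  console.log(window.screen.width); // e.g. 1920 (monitor)
  console.log(window.innerWidth);   // e.g. 900 (window)
  var layout = window.screen.width >= 1200 ? "wide-with-sidebars" : "narrow";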


As a remedy I have one of my monitors rotated 90° into portrait orientation. I can just drag the offending tab across and have a screen width of 1200px. There are a (very) few sites, though, that seem to cache the original screen width and need a hard refresh.


Why not resize the window to your preferred width? max-width would just waste most of the screen with whitespace, while a smaller window will let you put it where you want and work with other things in the rest of the space.

I have a pair of 27" monitors and almost never maximise windows - the exception is when working with very large images that need the full resolution.


I almost always maximize windows unless I'm doing something where I need to mentally integrate multiple applications. I'm certainly not going to fuck around with popping tabs out into individual windows and resizing them when casually consuming content. Either it's legible or I move on.


I dislike max-widths. They waste screen real estate.


So, books, magazines, and virtually every form of print should have margins, but the web should have the text go right up to the edge of the browser?

I disagree that a sensible max-width is wasted screen real estate. Even after resizing my browser I'd prefer my content to have a little bit of space to breathe.


I just realized that the comment section of Hacker News uses ~80% of the window width, with hardly any padding for top level comments.


100% of the window is wasted if the lines are too long to read.


The user opens a URL that downloads an 'app' and all its dependencies. So, to start with, you'd have dozens of UI frameworks, network libraries, parsers, etc on your system. Some of them would be great, others would be terrible. Slowly, as the best ones bubbled up to the top and became 'standards', each of those dependencies would disappear. Developers would settle on one of a few different engines and frameworks. Apps would be the content, a few scripts to drive the engine, and a manifest to tell the user's computer which standard engine to download.
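A hypothetical manifest for that scheme could be tiny (every name below is invented for illustration):

  // The app ships its content and names a standard engine; the user's
  // machine fetches the engine once and reuses it across apps.
  var manifest = {
    name: "example-app",
    engine: { id: "some-standard-ui-engine", version: "^2.0" },
    entry: "main.script",
    assets: ["index.doc", "dark.theme"]
  };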

Which would essentially be the modern web as it is now.


As soon as the author started explaining problems about browsers I started thinking about solutions. And I realised that "we" already tried to solve this problem with plugins. Flash, Java applets, Silverlight, etc. were all envisioned as solutions to these problems. And it doesn't take too much brain to realise they all failed (at least in the popularity contest).

The RESTful nature of the web gives a good structure on which to build, and time shows that evolution, instead of forcing one good solution, is a much better way to drive technology adoption when it comes to the masses.


What you describe would be some kind of natural selection among lower level abstractions. Maybe they would indeed converge into a "few standard engines", maybe not. Either way, they wouldn't force your hand: you could always use whatever UI framework or network library you want and still distribute your app as a simple URI.

In contrast, "the modern web as it is now" gives you a single built-in layout engine and not even a chance to implement network libraries. And that's exactly my point: For a document viewer, that's fine. For an application distribution platform, not so much.


Sometimes I dream of an alternate future where SUN weren't asshats and Java won and we're all running applications that come with maven POMs. Then I remember how badly the average web developer can bungle simple static HTML and realise how happy I am I don't have to debug spaghetti XML.


yes, sounds like hell


The author meant "app" as in a real application on your computer not a "webapp" like from the Chrome Web Store.

Interestingly, I'd say it's closest to how the latest Android does it. E.g., click on a Wikipedia link and it opens the page in the Wikipedia app.

I'm not really following your fragmentation/consolidation line of argument or how it relates to the modern web...


The modern web is nothing like that.


Hyperlinks are successful. If "native apps" had a standard way to handle links between apps, then we wouldn't need browsers. If "native apps" would run automatically in a secure and sandboxed environment without requiring any installation, then we wouldn't need native apps. We can argue about web techs, but there is no alternative to browsers and the web.

URLs / Security / Platform independence

Browsers based on standards offer all of these already. The problem is people trying to make products the web was not designed to run. That's why Flash and co became so popular at one point, and that's why browser vendors are now trying to come up with something like WebAssembly. I say sometimes it makes much more sense to write a native app. One isn't going to run a video-editing app like Final Cut on the web (especially since the filesystem API has been dropped, since Mozilla wasn't interested in it).


The thing is, people create software to advance a purpose/goal or (perhaps more often) to make money. The web is the most accessible/available platform for either. That is why so many non-web-like technologies/paradigms end up on the web (flash, SPAs, huge client-side frameworks).

The author is saying, let's take the accessibility/entrepreneurial-nature of the web and bake it right into the user's desktop environment rather than placing it on top of a platform (the browser) that has to rewrite much of what's already there in the OS.


As for hyperlinks between apps, you mean something like http://applinks.org/ ?


Off on a tangent here. He mentions he wants a tool which takes URIs and figures out the correct program to handle them. This sounds a lot like plumber [1] from Plan 9. Plumber has a powerful, configurable method of finding the right tool to handle a string, and other programs can make it aware of the context the string should be interpreted in.

[1] http://doc.cat-v.org/plan_9/4th_edition/papers/plumb


Sounds to me like how things already work on mobile OSes, e.g. on Android tapping a YouTube link can take you to the app to view it instead of the browser. Perhaps you could have the same kind of function on desktop OSes; it shouldn't be too hard, all the apps just need to understand URLs. Could even go further than a lookup table for certain domains, and have different protocols etc.


Taking Android as an example here: the way this works is basically by letting apps declare that they are interested in certain URLs, and then presenting the URL-clicking user a list of interested applications to choose from. This quickly becomes annoying, and checking the "Remember" button is something you quickly regret when you change your preference. This simply will not be practical on a desktop where you might use a hundred different programs. There is also the inherent limitation that you only send the URL to one program.

A better approach is having good defaults, and letting the user add associations as he needs. That way the control is returned to the user, and he will feel confident updating the list of associations. Bonus points for multicast and not being restricted to one string format.
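A minimal sketch of that lookup, with invented table contents:

  // Good defaults plus a user-editable override table; the user's entry
  // wins, then the default for the host, then a catch-all.
  var defaults = { "youtube.com": "youtube-app", "*": "browser" };
  var userOverrides = {};

  function resolveHandler(rawUrl) {
    var host = new URL(rawUrl).hostname;
    return userOverrides[host] || defaults[host] || defaults["*"];
  }

  userOverrides["youtube.com"] = "browser"; // the user changed their mind
  resolveHandler("https://youtube.com/watch?v=abc"); // -> "browser"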


Upcoming Android has deep links/defaults.


How do you deal with getting the execution environment, and the software itself, onto the client machine? With HTML+CSS+JS, it's automatic because the assets are loaded with the website and the browser itself is the execution environment. That's the biggest draw to making "web applications", there's no difficulty in delivering and executing the software; e.g. if I want to write in Java instead, the user needs to go and download a JRE.

I would love to be able to eschew the web tech altogether, since we can all agree that HTML/CSS/JS is a terrible solution for building rich applications. But how else do I deliver my software without making users jump through download + install + update hoops?


That's really only the situation for Windows users. Unix derivatives usually have a package management system, where you can get your package included — dependencies taken care of and all.

But yes, I agree. Many people use Windows, and the download-install-update circus seems to be driving people to develop web applications.


Which also means that Unix as a whole has many dozens of package management systems... So you package your app as deb, rpm, yum, pacman, ports, brew, nix, and plain tarball, and then hopefully most Unix users can get it to work. That's a maintenance nightmare in itself...


Javascript package managers like npm tend to work much more seamlessly than the old Unix ones. Wouldn't mind scrapping them all for https://node-os.com/

Also, the reason package managers suck is... entropy. See Joe Armstrong, The Mess We're In http://www.youtube.com/watch?v=lKXe3HUG2l4


I find that the Nix package manager is much more principled, general, and promising as a model for robust and easy package managing. And with NixOS, it's a huge step towards a declarative and reversible way of managing a whole computer system.


Doesn't Windows already have that? Go to "Control Panel", "Default Programs", "Associate a file or protocol with a specific program". For example FTP, HTTP, HTTPS get associated with Firefox, MAILTO with Thunderbird, URL:PowerPoint with PowerPoint, etc.


This pales in comparison to the customisability and power of Plumber.


We do not need browsers, but for different reasons than the ones the author gives.

Browsers are just one of many tools to consume one of many types of data on the internet. The fact that we initially fell into this browser-web-address-as-a-location-app-content-mining paradigm does not mean that it is the optimal one going forward.

To see how misaligned we've gotten, play a thought-game: if I were a blind, privacy-sensitive person, how would I consume content on the web? I wouldn't want or need ads, user-tracking, or any of the rest of it. All I'd want is somebody to read me some text for 5-10 minutes, maybe a few times a day. Each time I might have something to reply -- or not. This would provide me everything the current web does, and would be much more lightweight and flexible. 99% of the bytes we're pushing and the interactivity we experience in browsers have nothing at all to do with long-term value; they're much more focused on stickiness and engagement. One might use the word addiction. The core text data itself, while somewhat useful, is tiny compared to the rest of it.

Browsers are not built in users' best interest. Therefore I predict they'll be around for a very long time.


> I wouldn't want or need ads, user-tracking, or any of the rest of it.

Does anyone, blind or not, want or need ads or user tracking? (I suppose you could argue that any login constitutes a (hopefully) benign form of user tracking; but then I could say that blind people will need such logins too!)


Facebook is arguably a sophisticated real-time communications and social networking app. Imagine if Facebook launched a desktop app to do that. Who would use it? Would they ever have become as successful as they are today? My reading is the author seems to think something like Facebook is entirely unsuitable for the web and contrary to what it was designed for, but I think it excels in a browser because of, not in spite of, the strengths of the web: cross-platform, URLs, connectivity, low barrier of entry, and so on.

Someone will probably bring up the Facebook HTML5 mobile app thing, but consider that on desktop it was always good enough, and in fact they still have a mobile web version of Facebook, which as far as I am aware is also pretty good. I think the main problem Facebook faced with their mobile app was the immaturity of web view controls on mobile, which have since come on by leaps and bounds (WKWebView on iOS 8+, Chromium web view on Android 4.4+).


The mobile version of Facebook is crap, at least on Android. It locks up and jams the machine running it; the one running inside a browser is much better. This is on a performant Nvidia Shield tablet, so hardware is not the issue.


Lots of people used Napster, DC++, email, Usenet, Kazaa, IRC, ICQ, MSN, AIM, RSS, SMS, all kinds of stuff. There are lots of possibilities. Facebook's success is interesting but it's totally possible that something similar could have happened with another protocol than HTML over HTTP, no? Either way, Facebook is basically a commercial success involving network effects, vendor lock-in, good marketing, timing, etc; it's not a technical breakthrough at all.


A more recent example is WhatsApp; it is app-only and wildly successful.


I've been thinking the same thing for years. There were a few attempts at parts of the problem, like Java, XUL, or XForms, etc., but they didn't break out.

Might be time for a rethink of the whole thing: an open, remote-updatable, sandboxed, cross-platform app platform, leaving the docs to the web browser.


  open,
  remote-updatable,
  sandboxed,
  cross-platform,
  app platform
The problem isn't that we haven't faithfully accomplished all of those ideals set forth. The problem is that there were far too few people sophisticated enough to appreciate what was placed before them.

And amongst the minuscule audience that did understand what lay in their hands, half chose to abuse the unwashed masses that didn't tend to the honor system that stood in place of proper technical security practices at the time.

Were such things simply way too far ahead of their time, lost on a market too immature for such luxuries to be made generally available?

Will there ever be a time when people care enough about anything other than instant messaging, to invest hours learning the intricacies of how to make a VCR stop flashing 12:00 AM?


WebAssembly is the latest hope.


WebAssembly is only going to make the situation worse.


No, we don't; network protocols are what matter.

I like to have my email client, my native RSS reader, still jump to newsgroups occasionally, use my desktop chatting applications...


I often have mental flights of fancy when I wonder what the world would have been like if end-to-end had been preserved and IPv4 NAT hadn't arisen to put a stranglehold on protocol development. Twitter, Facebook, eBay, et al. should be protocols, not freestanding businesses.

It's much harder to "monetize" a protocol. That's a feature, not a bug.


The problem in my opinion is that the browser is addressing too many layers of abstraction at once. This makes it very difficult to get the specs right, security becomes immensely difficult to get right, and it becomes almost impossible to implement a browser (bad for competition).


Sadly there's no way to get there from here. Something like Java Web Start was and remains a much nicer application platform than a web browser. But every device has a web browser, and manufacturers fall over themselves to add support for the latest "web" functionality.

And ultimately it doesn't matter. Many of the layers of the computing stack are over- or under-engineered for the task they end up performing (have you seen the x86 ISA? The ELF spec?). But computers are very good at abstractions, so ultimately none of that matters. Running our applications on the web costs us a bit of complexity, a bit of performance, but we'll reach the point where all the mess is hidden from the ordinary day-to-day programmer.


The article mentions that "web browsers have become resource hungry beasts with millions of lines of code," suggesting that much of this cruft has been the result of having to support backwards-compatible standards for the web.

I'd love to see a project similar in spirit to Servo that, instead of aiming to refactor the language browser engines are built with, refactors the functionality they provide. Something that identifies the Majority Use Case™ and tries throwing out the rest.

I'm not saying that we should push for deprecation of certain functionality, but I think it'd be interesting if people would start using this browser for the promise of faster, snappier surfing.


I've been thinking about something like this for a while - a standard for an easily optimised subset of html technologies. To conform to this standard, pages would be restricted in the ways they can manipulate the DOM, have a simpler DOM, use only a small fraction of CSS properties, and not use some JS features e.g. eval, delete.

We can use the asm.js model for opting in. Browsers that support it run the pages super fast, other browsers run them just as fast as usual.
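For reference, here's roughly what the asm.js flavour of that opt-in looks like; the pragma is inert in engines that don't recognize it, so the same code runs everywhere (a minimal sketch):

  function AsmModule(stdlib) {
    "use asm"; // the opt-in: just a no-op string in other engines
    function add(a, b) {
      a = a | 0;          // parameter type annotations via coercions
      b = b | 0;
      return (a + b) | 0; // result is a 32-bit integer
    }
    return { add: add };
  }

  AsmModule(this).add(2, 3); // 5 either way, just faster on the fast path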

A browser engine supporting just this standard would be considerably smaller, more embeddable, and a nicer base for current WebKit-based apps (e.g. Spotify, Steam, or Atom). It might also help apps that want to use something like webviews for embedding content but need to be careful with memory / performance.


From what I've heard, Microsoft Edge looks like such an attempt. When a site requires some kind of compatibility mode, the not-so-good-but-indeed-old IE is spawned to serve it. Great approach, and I hope they set the bar high to make the common use case faster.

My ideal browser would support only something like "use strong" from [1] and spawn Netscape 4.0 for all pages that abuse JS.

[1] https://developers.google.com/v8/experiments


I think OP is missing the point. Browsers of today are taking over the space which Java tried to capture (remember the "write once, run anywhere" slogan?) but failed. They provide a (mostly) unified development platform. But the catch is that the unification comes as a direct result of the fact that they are meant to be something else. Any other unified platform faces an uphill battle while browsers are... just there. Are they perfect as a development platform? Hell no. But in the absence of any other option, well... we take what we can get.


Do we need Swiss Army knives? For every tool that's included in the knife, you can get a much better stand-alone version that performs the same task much more efficiently.

So, what possible benefit could you get from having so many poor tools in a single place, that you can't improve on by carrying around the equivalent set of separate high-quality tools? The Swiss Army knife ought to be such a terrible idea that nobody would ever use it, right?


The Swiss Army knives are of course known to be simple, beautiful, time- and battle-tested, coherent, reliable, etc. There may be better multi-purpose tools, but the ideal of the Swiss Army knife sets a high bar. Compare that to the browser... You can't bring a browser with you to the woods unless you have really good 3G. No single human can even understand everything a browser does. Browsers are huge, unwieldy, and change constantly. Army knives, like Zippo lighters, pride themselves on having near-Platonic designs that haven't changed in a hundred years. An intelligent extraterrestrial could grok them. A more appropriate metaphor for the browser is a tarpit, as in "Turing tarpit."


I'd like to know what the first Swiss knives looked like...

The solution would be to wait until our understanding of the technologies for presentation and software distribution over networks becomes stable, so that they don't change as much, and then to build the smallest, leanest browser we're capable of.

This doesn't mean that we don't have a need for browsers, as the article suggests; merely that we could use some better implementations.


> I'd like to know what the first Swiss knives looked like...

Taken from wikipedia:

> During the late 1880s, the Swiss Army decided to purchase a new folding pocket knife for their soldiers. This knife was to be suitable for use by the army in opening canned food and disassembling the Swiss service rifle, the Schmidt–Rubin, which required a screwdriver for assembly. In January 1891, the knife received the official designation Modell 1890. The knife had a blade, reamer, can-opener, screwdriver, and grips made out of dark oak wood that was later partly replaced with ebony wood.

Photo: https://en.wikipedia.org/wiki/Swiss_Army_knife#/media/File:W...


I envision a switchable client, specified by a URI in the response headers. The client would be a platform-independent bytecode of some sort, like NaCl.

The only thing the browser would supply would be the chrome, networking, sandboxing, and a canvas for the client to draw to.

The current web runtimes could be refactored into one of these clients. If you don't like the way CSS works, or if you think JS is weird, just write another client.
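A hedged sketch of that handshake from the host's side; the X-Client-Runtime header and the renderAsDocument/runClient helpers are all invented for illustration:

  // Ask for the resource, read the client URI from a response header,
  // fetch the client bytecode, and run it sandboxed against a canvas.
  fetch("https://example.com/app").then(function (res) {
    var clientUri = res.headers.get("X-Client-Runtime"); // invented header
    if (!clientUri) return renderAsDocument(res);        // fall back to a plain document
    return fetch(clientUri)
      .then(function (clientRes) { return clientRes.arrayBuffer(); })
      .then(function (bytecode) {
        runClient(bytecode, res, document.querySelector("canvas"));
      });
  });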


Well, we have a scenario rather close to the one described in the article on mobile: browsers for text content, and apps for specialized stuff that can incorporate web views if needed and access the net. (Yes, I’ve read the second-to-last sentence.) In a nutshell, it sucks.

To start with, I don’t buy the premise “Okay, so web browsers are awful for applications.” The statements before are way too generic to prove anything.

“[...]resource hungry beasts with millions of lines of code” falsely connects those two properties.

“[...]use several gigabytes of RAM, even when just displaying document-like content” might also be rooted in advertisers packing megabytes of rubbish in an iframe or web devs loading tons of unneeded web fonts. So, that’s bad engineering on the server side, not the browser’s.

“[...]that reimplements much of the features of an operating system on top of a real operating system” Chromebook anyone? Yes, that’s actual, ready-to-be-bought devices out there right now, that do exactly this. And lo, the problems are somewhat contained.

The conclusion also does not show any solution to the non-problem discussed above. “Imagine something like xdg-open.” I don’t need to imagine that, I have it right before me available in the terminal. And packing another service discovery on top of the stack is, to come back to my opening words, not so different from the closed-world app stores. Even Ubuntu has such a thing. And guess what? For people without technical knowledge keeping everything in the browser is way more efficient (work-wise, not performance-wise) than explaining arbitrary switches in context from browser to some app to some other app and back to the browser.

Security: “I’m no expert [...but...] doesn’t seem to be completely unrealistic.” The devil’s in the details, as virtually everyone who works on browsers’ JS engines can tell you. A runtime that downloads arbitrary binaries from the web to be executed sounds in every regard like a bad idea, even if you put it in a full virtual machine. The two-word argument against this is basically “Flash exploit”.

Platform independence: The author might be too young to remember Java’s “write once, run everywhere” claim, which turned out to be not so fully true. And trading the current state of almost-full platform independence in the browser for some proposed from-scratch infrastructure will produce exactly the disaster that Joel Spolsky warned about 15 years ago in the context of the Netscape rewrite (http://www.joelonsoftware.com/articles/fog0000000069.html).

“But one thing is certain: the web platform we have today is already bloated, does not suit our needs and severely limits innovation.” No. It is not certain. Browsers today run on low-profile smartphones. Bloated web platform? Most of these things are opt-in, and many clever people build fallback strategies in new specifications to enable _everyone_ to become part of the web. Limiting innovation? Quake runs smoothly in the browser. Who would have figured that 10 years ago?

All in all, to me it seems the post is written by someone who hasn’t yet fully grokked the web.


I'm not sure that "a decade-old game runs smoothly" would qualify as an innovation, rather than as a showcase of the exact problem the author was hinting at. Just getting stuff shoehorned into the browser that we already had running perfectly fine outside of it is not "innovation".

What about truly new stuff that nobody has seen before, neither inside browsers nor in native applications?


In 5 years; that’s my rough prediction. That’s how long it will take for the browsers (with significant market share) to catch up with existing native features and combine them with the web. Then there will be enough instances out there to rely on:

- canvas + 3D support

- asm.js / WebAssembly: a way to run performant byte code

- WebRTC: Video chat, file transfer P2P

- APIs closer to the hardware like vibration, ambience, speech, ...

- stuff that’s being developed but I don’t know about yet, because the field has become so huge now. Small glimpse: http://caniuse.com/

And for developers it’s a single platform, together with distribution channel.

Apart from that: truly innovative stuff happens on the web regularly. For example, look no further than Facebook (or Reddit, Imgur, Twitter, whatever you like): software (in the broader sense) that allows billions of people to connect with each other and share thoughts. Imagine that in the age of SMS, phone books or snail mail! You will see that, to make something like social networks possible, the browsers first had to evolve from the bunch of hacks that they were in the 90’s.

Another example of the power of browsers: FirefoxOS. A complete smartphone OS powered by web technologies on a thin Linux layer.

So my point is: the “truly new stuff” is partly already out there, you just have to look. And partly it will hit your devices when the browsers are evolved enough. It’s a continuous process, and not a single “wait, there’s more...” (which doesn’t surprise in the least, when you think of the _huge_ number of devices out there).

Edit: Formatting.


Surfing on a smart phone is a real pain. Pages take longer to load than in the 90s and contrary to the 90s you can't start reading before it all has loaded.

Java tried to be C++, but running on every machine. That turned out to be difficult. But I don't think the author is thinking of Java. I guess he has in mind domain-specific languages, which are abstract enough in nature to be executed faithfully on any system with the given capabilities.

Security also goes hand-in-hand with this form of abstraction. If the language can only express safe actions, the program will not be malicious. In pure languages, such as Haskell, one can use type-guarantees to enforce these restraints. One could imagine a virtual machine with this kind of typing.


> If the language can only express safe actions, the program will not be malicious.

Because 'safe' is not well defined, I can't argue rigorously against this, but it seems like the sort of thing that falls afoul of Rice's theorem (https://en.wikipedia.org/wiki/Rice%27s_theorem): for most reasonable definitions of 'safe', you can have a provably safe language or you can have a Turing-complete language, but not both.
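For concreteness, the usual statement of the theorem (my phrasing, writing \varphi_e for the function computed by program e):

  \text{For every non-trivial semantic property } P \text{ of partial computable functions,}
  \{\, e \mid \varphi_e \text{ has property } P \,\} \text{ is undecidable.}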


Rice's theorem is not really applicable here. It is a beautiful theorem, but it has to do with what we can compute, while "safe" has to do with what one can access.

A function taking Integers to Integers in a pure language cannot do I/O, and thus is "safe" to run, in the sense that I can be guaranteed it does not contain a trojan making my computer into a peer on a botnet. This is true, even if I allow it to compute any computable function.

Computational expressiveness and "safeness" are in a sense orthogonal. And just as I don't think it is always appropriate for any function to do I/O, I am not convinced all functions should be able to perform any computation. But that's a different discussion.

Regarding the definedness of the term "safe", I would say it is defined by your threat model. It is not an absolute term, but dependent on context.


Of course, if the language of a program is stripped down, the interpreter can be made much safer. But then we gain nothing from shoving other platforms down the users’ throats compared to, say, WebAssembly, which _will_ be shipped in almost all browsers in a year or so.

Just look at the long history of people trying to bring Python in the browser, or the fight to get Java applets _out of the browser_ again.

Basically I read the article as “scrap that web thing, and just begin from scratch”. And this is not a worthwhile path to go for many reasons.


> A runtime, that downloads arbitrary binaries from the web to be executed, sounds in every regard like a bad idea, even if you put it in a full virtual machine. The two-word argument against this is basically “Flash exploit”.

Not sarcasm, but an honest question: barring the argument "even full virtual machines have bugs", to which one might as well retort "even heavily tested browsers have bugs", why isn't it safe to run such a program in a virtual machine? It seems that most of the pain of Flash exploits comes from the fact that Flash doesn't run in a (proper) sandbox.

(I'm not a web developer, so I could easily be talking nonsense.)


I would have written something similar, had I not been so lazy. Good job, well put.


1) Almost every application uses technology that's built on older technology that's built on older technology. That's not necessarily a bad thing.

2) Nobody says browsers want to replace all applications. It's a false condition the whole article is based on.

And why is it anonymous? Does he/she know it's nonsense?


> 2) Nobody says browsers want to replace all applications. It's a false condition the whole article is based on.

Many people have been saying exactly this. E.g. http://blog.codinghorror.com/all-programming-is-web-programm...


The web browser is not the shell[1] or window manager, and it never will be. It may render GUI widgets just fine, and you could use it as a shell, but only because you can technically use any program[2] as a shell.

I know developing native client applications is not popular recently; there are good reasons for that, such as wanting to develop your software for as wide an audience as possible. What you have to remember, when choosing your development environment, is that there are always limitations to every platform. On the web that means you are always going to be sandboxed not only in what the application can do, but also in how it can interact with the user. Given that we always have to care about phishing, CSRF/clickjacking, and numerous other types of malware, applications developed for the web will never[3] be able to do many of the things we expect from a native application.

This doesn't mean you can't do good things on the web (we could list numerous examples of Great Tools that are available on the web); it just isn't ever going to have all of the features you get with a "real" native app. Even if you try really hard to get away from the "document"-style nature of the web, the sandbox and the realities of making things safe for the user will always be a problem. Yes, we can try to work around that by rebuilding another OS -inside- the browser. Some people are certainly trying. Instead of throwing your sanity away on that never-ending pile of problems and endlessly-expanding complexity, I suggest simply realizing that while some things just aren't going to be practical inside the browser, for other problems it's still a decent platform to develop for, and it is slowly getting better.

Oh, and the Mac/OSX people will be angry if you try to force them to use too many non-native GUIs.

/* I'm skipping the discussion of the "software as a service" scam... I'll assume, for the moment, that the desire to make the web into an GUI shell is not simply part of a scam to try to convert one-time sales into a recurring service fee. */

[1] https://en.wikipedia.org/wiki/Shell_%28computing%29

[2] I once saw /usr/bin/gopher stuffed into /etc/passwd as the shell

[3] at least I hope it's "never" - making a platform where I can impersonate too much of your native GUI over the network is just asking to be attacked


Thanks. Finally someone who says why the web has strayed too far from its original purpose.

The major problem with the web is: we use frameworks to abstract the differences between browsers, which abstract the web for different operating systems, which abstract the hardware.


I keep repeating it: I do have lots of experience in web development, but I also have even more experience developing native applications, and I came to realize the same thing.

Nowadays, when given a choice I always pick native projects over web ones.

But these types of statements usually earn downvotes on HN.


According to my company firewall, this site has been blocked for malware ... ?


Proving (part of) the point of the article ;)


This is why URLs exist. They have a protocol.

You use the protocol to match the application.
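That matching is a one-liner once the URL is parsed; a minimal sketch (the handler table is invented):

  // The scheme is right there in the URL; dispatch on it.
  var handlers = {
    "mailto:": function (url) { /* hand off to the mail client */ },
    "magnet:": function (url) { /* hand off to the torrent client */ },
    "https:":  function (url) { /* hand off to the document viewer */ }
  };

  function open(raw) {
    var url = new URL(raw); // throws on malformed input
    var handler = handlers[url.protocol];
    if (!handler) throw new Error("no handler for " + url.protocol);
    handler(url);
  }

  open("mailto:someone@example.com"); // routed by scheme, not by app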


I have long had in the back of my head the admonitory slogan "URL's and URI's aren't the same thing", but had never bothered to learn why. Your last sentence finally clarified it for me; thanks!


What the "browser" is trying to solve: Reading word document, Reading PDF, Playing Flash, Reading news-feed, Database search.

Instead of having separate programs for each document type, we should only need the "browser" to open them. And the documents should be able to link to each other. And be served using an open standard.

It would also be great if you could open general-purpose apps in the "browser" so you don't have to install them.


My question is: which shift do you follow then? The app bubble is there and it's about to burst. The web is trying for build once, deploy everywhere. It may not be ideal, but Xamarin and other technologies don't offer an easier solution.

I do agree that browsers should be reviewed and revised and trimmed down to the essentials with added functionality through encapsulated plugins or so, the browser tries to solve too much imho


I can't agree with the article.

Most people want fast and easy-to-use applications: you just need to type an address on any device with internet and you're done! No installations, no updates, no dependencies, no worries.

Imagine some app you don't know how it looks: you must install something like 20MB, then you find out it's crappy and now you must uninstall it. In the web browser you just close the tab and voila.


Web app still downloads the same 20MB of crap and leaves it somewhere in the cache.


Web assembly. Android deep linking.

Too bad he hates web browsers so much.

But I agree on a few ideas. It would be nice to have fast document-only thing separate from apps.

It would be nice if browsers had finer-grained security like Android.

It might be cool if we could take apart the browser monolith and have something more like components somehow.


> nice to have fast document-only thing separate from apps

The problem is economical, not technical. Content producers want stats (who is reading what for how long?). They want flexibility (images, fonts, math formulas, typography, videos, interactive 3D diagrams). They want income (Ads suck the least apparently). They want wide reach (desktop, mobile, ebook reader, billboards). They want interaction (comments, notes, sync). Todays browsers provide all that, but a document-only thing would restrict them. Find a way to improve upon the browser (most prominently on income) and investors will throw money at you to build it.


> It might be cool if we could take apart the browser monolith and have something more like components somehow.

On the other hand, imagine the developer test-coverage nightmares that would produce!


Ugh, not one of "these" articles again. Sorry to be blunt, but the web has evolved beyond "just deliver some static text". Get used to it already, it's been decades!


Ugh, not one of "these" comments again. Sorry to be blunt, but people are gonna keep thinking the modern web sucks. Get used to it already, it's been decades!


>>> people are gonna keep thinking the modern web sucks

Yeah, people who are too old to learn new things and/or hipsters. Use curl to browse the web as far as I'm concerned.

Meanwhile, Google search does instant results, Facebook comments don't trigger a page refresh and YouTube is an SPA. Seriously, the web has evolved, there's no point to these complaints. What exactly is the complaint again?


Hm.

Is it so hard to imagine someone could have a different opinion than you?

All of your examples are factual claims which I don't dispute. (well, assuming "instant results" is branding for "submits the query every time you hit a key, modulo some debouncing" -- not actually instant in practice). I disagree with their value, though. I noticed both of those changes, because stuff stopped working for me. The only change I've seen from Instant Results(tm) is that now sometimes when I'm scrolling through the search results, they'll all disappear and I need to type some more letters in the search box then remove them. Luckily you can avoid that by using the browser's search box instead of the website. On YouTube I had an issue for a while where leaving fullscreen after changing video would cause the page to reload, interrupting playback for 2-3 seconds. I believe that's mostly fixed now, bringing the functionality back up to par with the original implementation.

Maybe I'm just a hipster, but I like my computer to do what I tell it, not what some random web designer overly enamoured of his own "blog application" tells it. Most of the time, I want a document, not a portal or an experience. In that sense, I don't think anyone wants to go back to the web we had, so much as sideways to a better future.

EDIT: Forgot Facebook. Don't use it. Can't say. But the users sure seem pleased with the constant evolution of the UI.


>>> Is it so hard to imagine someone could have a different opinion than you?

It's not hard at all. I'm not in denial, I've read the original article even top to bottom. I know people like this exist. I just don't think it has any relevance today.

>>> not actually instant in practice

Don't take it so literally. The feature is called Google Instant. That's the name.

>>> I believe that's mostly fixed now, bringing the functionality back up to par with the original implementation.

Your argument is... that there are bugs? Yeah, nice one.

>>> Most of the time, I want a document, not a portal or an experience.

In which case, a document should be given to you.

But Google, Facebook, YouTube and many others, including "blogging applications", do not offer documents. They are dynamically generated for you, client or server side.

>>> But the users sure seem pleased with the constant evolution of the UI.

I'm pretty sure they would be less happy if Facebook delivered simple documents with no styles attached. And I don't think they hate it as much as they say they do, as Facebook is still actively used.


That's an easy question... yes and no. You need them because many things are not available without one, and you don't need them because anything can be done without one if the proper native applications exist... but that doesn't mean that these apps should or should not be built...

If the web were served in a more structured way, we could have both. It's a pretty mess right now.


Yes, we do - for hypertext documents - as per the original design. Retrofitted JavaScript we don't need.


HTML6, the structured web that can be consumed any way you like.


You only have to look at iTunes to see why this is a bad idea.


Yes.



