What should follow the web? (plan99.net)
167 points by telotortium on Sept 27, 2017 | 168 comments



Though I disagree with many of the points made here, I love that this person is taking a position and putting in the real work to lay out his argument for it.

I think the main point he's missing is that most platforms die due to lack of adoption. The ones that succeed must attract developers, and the current state of software regrettably shows that developers (and users) aren't generally motivated by security concerns -- they're motivated by more fundamental questions like "can I get the job done quickly" (users), "can I get paid" (developers), and (importantly) "can it run on an iPhone" (which kills most new app platforms immediately).

This post focuses a lot on tech, but platforms rarely succeed for having good tech. If history's any guide, they succeed by having adequate tech and a huge pile of marketing money.


Thanks!

I agree that most platforms die due to lack of adoption. However, isn't this stating the obvious? All new platforms start out with no users. Some clearly gain adoption and don't die. That's not a reason to not think about new designs.

I disagree about marketing budget. The web came from CERN as an academic project. It didn't have a marketing budget. Its primary competitor at the time (AOL) had an enormous marketing budget. In the end it didn't matter.

I also agree that security doesn't motivate developers much right now. I'm thinking about this in terms of both security and productivity for that reason.

However, I think it's possible and likely for people to care more about security in future for three reasons:

1) The consequences of breaches seem to be getting worse. The Equifax C-Suite was just completely cleaned out due to, apparently, a fairly pedestrian XML deserialisation exploit in Apache Struts (which has had lots of them). Corporate America will be sitting up and taking notice of that. When top people start losing their jobs because of mistakes of programmers at the bottom, they'll start to care about security more.

2) The ultimate consequence will be the first major conflict in which 'cyber warfare' plays a part. I hate that term personally, but it's the one governments understand. The day a major industrial nation's power grid is shut down by a much smaller and weaker country is the day that everything will change with respect to computer security. If you have never considered what happens in such a scenario, google the term "Black Start" and learn just how difficult and complex it would be to bring a country back from the brink if its entire national grid had tripped out.

3) Part of the reason nobody seems to care about security is the sheer hopelessness of it. It's impossible to care about security when you know you're going to fail. All developers who aren't delusional know that sooner or later they will fail (and probably they'll never find out). The tools are so bad that it's pointless even trying to keep up with new exploit types. I hadn't even heard of SSRF exploits before last weekend and I read tech news obsessively. If I can't keep up, I don't trust anyone who isn't a full-time security specialist to keep up, but unfortunately security isn't something that can be neatly compartmentalised into a single role.

So what can we do?

• Prepare for the worst. There will be more major breaches and eventually some sort of water or grid collapse; it seems inevitable to me. Make sure you have plenty of paper cash at home in case of a breach of payment networks.

• Prepare for the day after the worst. Figure out tools and approaches we can start to use if/when security does become a more important issue.


I think the web is the lone exception.

Consider the other big software platforms from the last few decades: Android, iOS, Java, Windows. The trend is adequate-but-not-exceptional tech with rich companies pouring cash into their success. (iPhones have better than average tech in the hardware, so maybe there's hope that great tech will make up for lack of cash, but iPhone tech also required massive amounts of cash to develop and the software is also middling.) You can highlight any number of parallel attempts at platforms that arguably had better tech but failed.

I guess another way of phrasing my point is that good tech doesn't seem to be much of a factor in succeeding; ok tech with money succeeds, good tech without money fails. (Where "money" here maybe stands in for a bunch of related things, including "marketing", "building a market for developers", "attracting paying users", etc.)

I am sorry for the random sniping. I wish you luck and I will read your subsequent posts with interest. I wrote my comment because I would like to read some higher-level analysis (e.g. why will your project succeed when similar attempts X, Y, Z failed, and what is different about the state of the market today vs last time).


First of all, great article; regardless of outcomes, I'd never considered that an alternative web would be a possibility.

At first I was skeptical, like most here, but it doesn't sound impossible at all. If you fix the most pressing issues regarding security, improve productivity by orders of magnitude (with great UX by default, even at the expense of some customizability), and can make it run on an iPhone, it might be a path worth exploring, yes.

Have you heard about Urbit? It seems to also be a "new internet" kind of project, but more focused on decentralization, which is very important too. I'd be interested to know your thoughts about it, as well as any progress on the "NewWeb" in case it becomes an actual project.


The name Urbit rings a bell, I think I read about it many years ago. But I just checked the website and it seems totally different, so I guess I need to refresh my knowledge of it.

edit: oh, they're doing an ICO. first impressions: not good


I don't know enough about Urbit's implementation or team to vouch for it, but I do find the basic idea promising: that we should have control over our own data, and should be able to do whatever we want with it.

I'm not sure why you thought an ICO was a bad first impression. Yes, there have been several scams, but the blockchain space is full of very exciting opportunities. Another commenter mentioned that a "new web" would have to provide sufficient value to the end user that the current version does not/cannot. These experiments in decentralization are worth watching and are probably where the "sufficient value" will come from to justify the use of new tech.


> they succeed by having adequate tech and a huge pile of marketing money.

The key to the web's success is its completely unparalleled levels of accessibility. All you have to do is remember a domain name and you have a portal into anyone's personal network application on any device; that feature can never be topped regardless of the tech's quality or the marketing budget.


I don't think most people or maybe even anybody remembers the domain name in order to access the pertinent network application.


I don't think that's true. Certainly people rely on search engines more now than ever, but people still know the domain names of their most common destinations, e.g. google, facebook, reddit, amazon, gmail, ebay, github, youtube, pornhub, aol, etc. It's also pretty common to see people visit domains directly after being instructed to do so by advertising, provided the domain name is easy to remember, e.g. hotels.com, indeed.com, apple.com, etc. Also, individuals quickly become familiar with sites that they visit habitually; certainly most users of gofundme, patreon, paypal, etc. are not doing a google search to discover these URLs, they just type them in (and often the browser autocompletes it for them).


I was perhaps being hyperbolic; after all, I remember the domain name to access the network application, but I'm a programmer and I only do it for a limited number of applications. Otherwise I use one of the applications I remember to find the ones that are not among my most frequently used. In this way it calls into question the whole killer-app thing of having a domain name, because in most cases you need something sitting between you and the domain name that gives you the right domain name for what you want loaded.


On the flip side, I've seen, and heard of other people who have seen, kids who type facebook into the search bar to go to facebook.com. What fraction of people do this right now? What fraction of new internet users do this? It would be interesting to find out. Perhaps in the not too distant future, most internet users will be doing this.


> that feature can never be topped regardless of the tech's quality or the marketing budget.

Never? That's a silly thing to say in the computer industry. I'm going to guess that we'll have an alternative to the web sometime in the next 20 years.


"Never" should really be "not in the foreseeable future", but certainly not within two decades. The web is the apex of incumbent systems and there is just no clear path, desire, incentive or business need that would facilitate a transition to an alternative, especially as the web approaches full feature parity with "native" code. The only people that have irreconcilable problems with the web are a certain subset of programmer, but for most people an alternative is unnecessary and perhaps even the opposite of progress.


> The web is the apex of incumbent systems and there is just no clear path, desire, incentive or business need that would facilitate a transition to an alternative, especially as the web approaches full feature parity with "native" code.

I don't know about that. Considering the possibilities with AR/VR, quantum computing, machine learning, smart fabrics and environments, and the like, it seems to me there could be an incentive for an alternative inside a 20 year timeframe. Will all that run over the web, or have a web interface? I tend to think at some point big technological changes will make an alternative appealing.

But maybe not and it will all be web assembly apps writing to a canvas container ;)


>possibilities with AR/VR,

The possibilities for multimedia and gaming are quite interesting, but there is no reason to suggest that this technology will have any impact on any possible alternatives to the web.

>quantum computing

Quantum computing is in the "unforeseeable" category; we're not even entirely sure that quantum computing will ever be practical (as far as general purpose computing goes), and it certainly has no implications with regard to an alternative to the web.

>machine learning

Same situation here. The machine learning discipline is expanding quickly and yielding very interesting developments, however, there's just no reason to suggest that machine learning will provide the impetus for a transition into a web alternative.

> smart fabrics and environments

Same here too.

All of these developments are orthogonal to the state of the web and none of them really provide any mechanism or incentive to create or move to an alternative.


I'm going to guess that you are correct, and that alternative will be called "world wide web" and run over HTTP(S). It will just be different somehow.


Surely there is an app for that.


I think it's called a web browser. ;)


Search, actually, hidden within the browser, aka "navigational queries". A significant portion of all search queries are just that.


> "can it run on an iPhone" (which kills most new app platforms immediately).

That nails it.

Going off on a tangent here:

On a macro scale, the devices in our hands are executing output from another machine. This output is usually processed by a browser or a native app.

To make our lives easier, browsers need to be more like OSes; otherwise, we have to write code for each platform to execute the same thing - like the present native app scenario. (I'm aware that we can use hybrid dev, but those tools aren't entirely satisfactory.) WebAssembly holds some promise, it seems. Or does it?

Browsers will become more powerful and perhaps become the most important part of an OS. Maybe eventually the borders between the browser and the OS will merge, and the huge collection of devices and servers will look like a distributed OS of sorts.

Another thing is the tooling for the present web dev scene, which needs to lighten up. Setting up tooling takes more work than actual dev at times ;)


Yes. The web succeeded in the early days because it gave people a way to put their thoughts out in the world when it was previously impossible. No one cared that the design was awful, that you had to code html by hand, that the urls weren't very readable and followed strict conventions. A new web does not have to be an object of technical beauty. It has to enable some form of human desire that is not fulfilled by the current web. One thing that I've thought about frequently is how the web enables global communities but how little it does to enhance local communities (based on physical closeness).


Maybe the future history of the web will be Chinese-made. There are multiple independent app platforms on Android there. Also, if standardization is slow and mired in incumbent company interests, maybe something capable of a unilateral forward thrust into a big market will have success.


I'd also love to see him come up with some sample code, assuming the platform he envisions, with the types of APIs that he thinks should exist. I think the juxtaposition will be a lot clearer that way...


Yep I agree with your last paragraph. Just look at how Commodore went down the drain despite having superior technology.


I didn't see any work besides writing? It seems like he's working on a financial company, not advancing the open web.

I didn't see any mention of a plan to make it happen. It seemed more like "It would be nice if..."


Writing, thinking, researching, some coding.

My company is developing Corda and we're putting some of these ideas into practice there, although Corda is not a web competitor. In particular around RPC and serialisation.


> Components, modules and a great UI layout system — just like regular desktop or mobile development.

I don't understand this - I develop for all platforms (web, iOS, Android) and the web is always much faster to develop for. So much so that a lot of mobile developers are starting to use web stacks (like react native).


Early in my career, having mastered Visual Basic I decided it was beneath me and I'd move on to more sophisticated tools like J2EE. When Ruby was born, Java's IDEs and static tooling became stupid, because Real Developers(TM) use raw editors like vim and emacs. Years have passed, I develop less and manage more, but I still use vim and I've moved on to yet other languages.

And now, 15 years into my career, I face the painful realization that VB remains the most productive environment I ever worked in. Yes, it was narrow. If Microsoft hadn't implemented an API for a feature, you waited for the next version.

Therefore, this question does give me pause: Why has the web failed to produce an experience as productive as several options (Visual Delphi, anyone?) from over a decade ago? Competition is the web's strength. But it's also its undoing. Personally I think this article is quixotic—and I prefer a decentralized, inefficient (JS), sort-of secure (HTTP), competitive web to whatever top-down integration visionaries might push—but I think he—and many older developers—are asking the same question: What happened?


People often make this complaint about Flash rather than VB (although VB is also a nice example - thank you).

The web platform aims to solve a _very broad_ problem. I don't know if you've used VB for big projects - but as someone with lots of nostalgia for VB6 (and winforms) it got very painful around the 20K LoC mark and didn't really support complex UIs in a very efficient way.

There certainly _are_ visual builders for web design (literally hundreds - google "React UI builder" or "Angular UI builder" for some examples). They just haven't received adoption. I think it's because people always get weird UI requirements and the tools break. How many UI designers worked with you when you built UIs with VB? For me that number was 0.

In general, UI is designed today in sketch/photoshop and then a tool like Zeplin is used to export the design's properties to code automatically.


> people always get weird UI requirements

I would say this is dancing around the issue. The fact is, you need a unique UX, or your app will lose its identity in a sea of competitors. You can't build something interesting by using a library of off-the-shelf sameness.

Tools like VB and UI builders really shine with internal, corporate-only apps that put functionality way ahead of form. You can crank out requirements all day long, and the internal users really don't care much. It's just a tool to get their job done.


As a user, internal or not, I strongly wish that your app WOULD lose its identity. I don't want your app to have a separate identity; I want it to look and feel exactly like everything else I already know how to use, so I don't have to waste any additional time getting familiar with it. I don't want "something interesting", I want "a tool to get my job done", and the more you try to push a brand and make your app distinctive, the harder it is for me to plow past all that and get to what I really want to do.


You're absolutely right: the web and its breadth of tooling solves A LOT more problems than VB or Flash could ever dream of.


I disagree. I don't know about Flash and I just assume that VB was more or less like an underpowered Delphi. The problem was not tooling at all... at least with Delphi, please see my other comment.


But the web lacks a standard layout and design tooling that all these other platforms have had for a long time.


Same here, except replace VB with AS3/Flex. I've since moved on to learn ES5/6/7, rails, and react; react as a concept is approaching Flex in terms of productivity, but the tooling is still not quite there. In terms of raw programming joy, I've not seen an editor as clean as FlashDevelop in years, and I say this as an avid Sublime Text 3 user. Sigh.


I fully agree with you, both on AS3 and FlashDevelop. This coming from a game developer.


Your career seems very similar to mine; I've somewhat recanted on visual designers too, having recently been maintaining some old desktop winforms/c# apps. I think you're being somewhat rose-tinted though: those designers could and did shit all over your code from time to time, and merging could be a huge headache too.

I still find the middle ground of hand-typed code + a decent UI library (Qt, Gtk, maybe swing, etc.) to be the sweet spot for productivity. I find it both productive enough in the short term and productive in the long term, because it encourages componentization and doesn't encourage spaghetti code in click handlers.

Unfortunately we've mostly seen a transition from visual designers to xml/html, and that is a step backwards.


I worked a lot with Delphi ("Visual" was not in the name, BTW) and I predicted more than 15 years ago in a mail list that web programming was going to completely eat most of the GUI programming.

What happened? Short version: 1) Everybody talks about productivity but nobody really cares 2) "Some" people care a lot about the price of programming tools 3) Web architecture provides for free a handful of requirements that were increasingly being demanded and were difficult to fulfill with desktop programming tools.


Well, let me tell you my story. The first company I worked in was developing and selling an ERP to several middle-sized companies - we had about 30-40 clients, with maybe half of them requesting customizations and new procedures on a regular basis. We were just three developers, and the ERP was written in C. Not even a SQL database; access to the DB was based on cursors. Sounds like a nightmare, right? And yet, that has definitely been the most productive environment I've ever worked in.

The secret was a well-tested and never-changing base library and hyper-specialized developers, but especially no deviation whatsoever from the standards ("can we have that string in blue?" "no, not possible, sorry"), a very controlled environment to run on, and strictly defined use cases.


Even on platforms that _have_ "NetBeans UI Builder"-like tools, people tend not to use them.

My iOS developer friends and I don't actually use storyboards for real UI in non-demo apps, because they get quirky and it's a lot easier to define the UI in code. In Android (and other UI systems like WPF), declarative markup (XML) is often used directly, with the visual view only used to preview changes without having to compile (a problem the web doesn't have, and one Android is getting better at).


Xcode's Interface Builder is just dated. Try Unity3d's component-based UI system with live editing during the "Play mode". It's so great to build everything visually with a tool like this.


How is web faster to develop than native? There is so much lame reinvention of the wheel going on.


I don't have any experience developing for native, but I don't mind other people reinventing the wheel for web if it ends up being a solid package I can import into my own projects, saving me a lot of time. In short, npm is why I think people might say that.


If you're constantly reinventing the wheel on the web then you're doing it wrong. Especially with the widespread adoption of package managers.


Which wheels are being re-invented on the web?


Much of that is because the Android tools are absolutely horrid; they tried to make things simple but ended up making them insanely complicated. Compare a hello world tutorial for Android (http://www.tutorialspoint.com/android/android_hello_world_ex...) with one for the desktop (https://developer.gnome.org/gtk-tutorial/stable/c39.html#SEC...) to get an idea of just how crazy Android is.


I think the major reason people use things like react is the siren song of code reuse. More often than not, you end up with a worse product and a lot of platform-specific workaround code.


Don't forget cheap/reusable JS developer workforce.


I admit I don't have much experience with web or mobile apps, but I thought that some UI elements like 'pull down to refresh' or 'bottom navigation' [0] are harder to get 100% right in a web app. Am I wrong?

[0] https://m.imgur.com/a/m63BW



In 1994, Sun engineers who worked on RPC published "A Note on Distributed Computing" [1], where they argue that hiding remote interactions behind interfaces designed to look exactly like local interfaces is not a good idea. RPC in itself is not bad, but only if the local<->remote interactions are exposed. The web with ajax makes message passing explicit and obvious, so even inexperienced developers know which calls are remote (expensive and can fail). If these interactions are hidden behind an RPC API, the performance and reliability of apps may suffer.

The NewWeb seems to be inspired by past successful frameworks for building desktop applications, such as Delphi or Visual Basic. But these were local frameworks; NewWeb, being a local client talking to a remote server, can't emulate these solutions by masking remote interactions, because it would end in the trap described by the Sun engineers.
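
To make the contrast concrete, here's a minimal sketch (the types and endpoint are hypothetical): a transparent RPC stub whose signature hides the network, next to an explicit call that can't be mistaken for a local one:

    interface User { id: string; name: string; }

    // transparent RPC: nothing in this signature admits the call
    // can be slow or fail -- the trap the Sun paper warns about
    interface UserServiceStub {
      getUser(id: string): User;
    }

    // explicit message passing: the Promise and the fetch make the
    // remote hop (and its failure modes) visible at the call site
    async function getUser(id: string): Promise<User> {
      const res = await fetch(`/api/users/${id}`);
      if (!res.ok) throw new Error(`remote call failed: ${res.status}`);
      return res.json();
    }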

[1] https://github.com/papers-we-love/papers-we-love/blob/master...


I think that advice is a little outdated. Not because it's incorrect, but because we've already taken care of that distinction with common patterns. If something returns a future/promise, it's a signal that the operation is at the very least expensive, and likely non-local. That's a pattern used pretty much everywhere now, and I think developers understand what the implications are.

Beyond that, 1994 was a time when RPC was rare in apps that people would run on their desktops. Developers needed a big fat notice that a function call could involve network access because it would be an exception to the norm. Now there are few apps that don't do some sort of RPC, and people are pretty used to dealing with the latency and error-prone nature of making network calls. (Not to say people are always good at dealing with this, but they're at least aware that they're doing RPC without much fanfare needed.)
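
For example (URL hypothetical), the now-common pattern of a promise plus an explicit timeout is exactly that awareness showing up in code:

    // a remote call gets an explicit failure mode (abort on timeout),
    // something no local function call would ever need
    async function fetchWithTimeout(url: string, ms: number): Promise<Response> {
      const ctrl = new AbortController();
      const timer = setTimeout(() => ctrl.abort(), ms);
      try {
        return await fetch(url, { signal: ctrl.signal });
      } finally {
        clearTimeout(timer);
      }
    }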


There were tons of Delphi and Visual Basic projects written with DCOM and MTS in mind.

Also both had a plethora of components for network programming.

Finally, many of those applications were a typical three-tier architecture, not much different from SPAs, but with much higher developer happiness.


I know the OP isn't discussing this one until next time:

> 5. IDE oriented development

But it should also be practical to manually edit text files as a fallback, assuming one is OK with not having niceties such as auto-completion and on-the-fly error checking. Counter-examples are Apple's .pbxproj (Xcode project) and .xib/.storyboard (Interface Builder) files.

The reason I want this is in case part or all of the IDE is unusable or just unproductive for some developers. For example, as far as I know, blind programmers can't practically create a UI in Apple's Interface Builder. Hand-editing markup like XAML is OK though.


Build tools can always be replaced. Look at autotools, cmake, scons, etc.


Maybe the reason why a lot of “web apps” are so complex and huge has something to do with over-engineering, and with developers loving laziness as a good trait of their craft. That results in a lack of discipline; then you add pressure from management to produce faster rather than focus on simpler solutions. I don’t see how these factors would not creep into any “newweb” platform. Things are complex because of people, not machines. After all, we made them so.

Unless you have strict standards that are adhered to by lots of parties, I don’t see a way out of complexity, clutter, and large .js downloads. Then you add competition, which does not have anything to do with rules; rather, it loves breaking them to get a higher margin. That doesn’t really add simplicity to the mix.

I happen to like the way the web is going, and this “newweb” business sounds like the ever-growing .js framework churn. I agree there are problems, but they're definitely not structural enough to warrant a restructure. Then again, that is the power of the web: anyone can pursue this.


Something that should be baked in to a "new web" should be anonymity and decentralization.

We need to find a way to get out of this big brother scenario we've got ourselves into.

A "new web" would be the perfect time to fix that.


Yes! We need a web on an overlay network, so nobody truly knows that "you're a dog".[0] Or where you are. For both users and websites.

0) http://knowyourmeme.com/memes/on-the-internet-nobody-knows-y...


Yes, I'm hoping something like IPFS is in our future.

Another killer feature would be the ability to never lose website versions or content (like git).

Also, I hope we can finally do mesh networking (with IoT?) and remove reliance on ISPs.


Is the web not salvageable? Look at web assembly. The browser vendors added a VM and a bytecode format that runs on it. And they did it pretty quickly. In my mind, this is the way things are going to improve in web-land. Not "throw it all out and start over."

If someone/a company wants to build an "app browser" and associated protocols and publish it then the market will decide how that goes. I think it's a cool idea, but I doubt it would gain traction because almost everything enumerated in this post is a developer concern, not a user concern.

In software, there's always going to be that temptation to start all over again due to fundamental mistakes that were made years ago. But I feel there are far too many entangled interests in the web for that strategy to make sense. Hey, if someone wants to work on it, god bless them.


WebAssembly is symptomatic of the wider problems.

There have been many initiatives over the years to add more languages to the web platform than just JavaScript. I remember one from Google that tried to add a form of Java that had better integrated DOM access and didn't use a separate/traditional JVM (more like another mode on top of V8, iirc). There was NaCl (two attempts). Of course there were plugins as well. They were all killed off with justifications that ranged from the reasonable-sounding to the quite obviously ideological.

WebAssembly is the latest attempt to scale this wall, and what does it give us?

A new bytecode format that nothing uses and which is accepted by no hardware. It looks suspiciously similar to JVM bytecode, featuring a small set of bytecodes and a bytecode verifier, except that there's no support for expressing patterns that all high level languages have, like exceptions or garbage collected memory allocation.

The primary new capability, after years of work, seems to be that you can now embed C into a web page (as long as you bring your own C library, I guess) and run it slowly until it warms up. Although they handwavingly claim that in future they'll add GC support and that'll unlock support for other languages, it will come at a massive performance penalty that makes it effectively useless because such languages require complex JIT compiling runtimes to achieve good performance and you won't be able to do that sort of thing inside a wasm module.

The web's designers had many other options that would have been far more useful but they went with this one. What is their solution to avoid web apps now importing the world of manual memory management vulnerabilities? They don't have one: the best you get is preventing the C code from affecting the browser.

The only thing I find sadder than WebAssembly is the undeserved adulation it seems to be garnering from the web dev community. I get it - at least there's progress of sorts. But, sigh.

> I doubt it would gain traction because almost everything enumerated in this post is a developer concern, not a user concern.

That's true of any developer platform. Users ultimately don't care how software is built, unless problems with the platform hurt the quality of the apps.


AFAIK, WebAssembly is based on asm.js, which can be compiled from any LLVM language, not only C, using Emscripten. There are full implementations of Lua (https://daurnimator.github.io/lua.vm.js/lua.vm.js.html), Python (http://pypyjs.org) and even Java (http://teavm.org/) in asm.js. You don't need to write in the C language at all.
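
For what it's worth, consuming such a module from JS is already pretty simple; a sketch, assuming a hypothetical module.wasm that exports an add function:

    // fetch, compile and instantiate a wasm module, then call an export
    const bytes = await fetch("module.wasm").then(r => r.arrayBuffer());
    const { instance } = await WebAssembly.instantiate(bytes, {} /* imports */);
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5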


Yes, I'm using C as shorthand for "C and C-like languages", i.e. ahead of time compiled with manual memory management. It can do C++ too if you ship your own STL.

TeaVM has some sort of experimental wasm backend. Presumably it has to ship its own GC and can't do any kind of runtime profiling, which turns out to be a 20% win for Java and far, far more for other languages (like scripting languages).

My view is: if you were looking to bring to the web better performance and higher productivity, let alone more security, why would you combine low performance with low level languages? That's the worst of all worlds! And WebAssembly will be low performance for a while - they have to redo all the work done so far on optimising JIT compilers.


It's better than pretending Javascript is a "bytecode for the web," and transpiling to that. It's better than Flash or Java applets. If we want to consider the long-term archiving of software and keeping it runnable through the web (and I think we really really should consider that), we need something better than what we have, and WebAssembly seems like a step in the right direction.


> I doubt it would gain traction because almost everything enumerated in this post is a developer concern, not a user concern

I'm not sure this is true. My Mom doesn't really know the difference between an app and a website, but if I showed her the Facebook web app instead of the Facebook iPhone app I'm sure she would complain that it was not quite as responsive or didn't implement the same touch gestures.


> Is the web not salvageable? Look at web assembly. The browser vendors added a VM and a bytecode format that runs on it.

It only solves one part of a multipart equation: intermediate format for the web that multiple languages can compile to.


Web assembly is a step backwards, not forwards.


Care to expand on that?


View Source is a feature, not a bug. Having third party JS was bad, having third party wasm will be a lot worse.


I like the spirit of View Source, but in practice js is already obfuscated with minification.


Sure, but that's no reason to make it worse on purpose.


Not sure what you mean, WebAssembly supports debugging via source maps, so you can provide source if you like.


What if anyone in the world who had an idea for a "new web", could build it tonight, and have it used tomorrow?

My solution to the "new web" problem is radically different (though to be fair, trying to reinvent the web is in itself a radical idea).

I've posted my idea to many similar threads, so I apologize for repeating myself, but I feel pretty passionately about it and I feel it's a good idea to try to spread. Until I start receiving convincing evidence / arguments that the idea isn't worth spreading I'll probably continue.

Imagine if we didn't have to decide what the "new web" was going to be, BUT we did allow that experimentation to take place? I say we shouldn't make it a requirement to "convince people it's the right thing to do before it gets built and people start using it".

What if users didn't use "browsers"? Instead they used "meta browsers": applications which host browser engines. Not only could apps / documents / etc. be downloaded by this "meta browser", but the experience of switching between browsers would also be seamless to the end user. If they didn't already have a given browser engine, they would be prompted to download it, if a particular app / document developer decided to support it.

In this "new web", the "document / app" developer decides which browser engine the "meta browser" should render their app with.

What if a browser engine developer decided he didn't want to support RFC 3986 (the URI syntax spec, https://tools.ietf.org/html/rfc3986)?

It probably feels like they're on a suicide mission (for their browser), but why should it be a "requirement"? If the ideas are bigger and better and eventually catch on (i.e. things we haven't even thought of yet), why should "the web" somehow dictate what core set of ideas are the right ones?


The WWW uses DNS. Think about it: DNS evolved as an upgraded, distributed hosts.txt table. Fundamentally, an entry in that name-ledger is a pointer to a machine which you would log into. Who gets to change that ledger? It's static.

Now we have blockchain tech where names can point dynamically to content or cluster of machines. Ethereum ENS is an early version of this.

Imagine you wanted a global map of places on the Internet itself. Like OpenStreetMap, but so secure you can rely on it, so that self-driving cars can use it. Technically it's not impossible anymore. The web can't do that, because of single authorship of data (way less secure than multi-party authorship).

Ironically, Mike Hearn has suggested such a system with TradeNet [1], but apparently has missed what is happening with the evolution of blockchain tech.

The next web will be a transaction system, not a communication system - the former is a generalization of the latter. If you're interested in building these kinds of systems - we are a startup building the foundations, and we're hiring.

[1] https://www.youtube.com/watch?v=MVyv4t0OKe4


My day job is working on Corda, which is a distributed ledger platform. So I haven't missed it.

When an app identity is based on a public key, you can start to do things like load them from a BitTorrent style network (if you want to). Or define a traditional CDN as the primary entry point but have slower/more decentralised systems as backups. App identity doesn't change so locally stored data and sessions are not lost.
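
A minimal sketch of the verification half of that idea with today's Web Crypto API; the key, URLs, and the choice of ECDSA/P-256 are all assumptions for illustration:

    // the app's identity is a pinned public key; its code can then be
    // fetched from any untrusted source and verified before running
    async function loadApp(pinnedJwk: JsonWebKey, bundleUrl: string, sigUrl: string) {
      const key = await crypto.subtle.importKey(
        "jwk", pinnedJwk, { name: "ECDSA", namedCurve: "P-256" },
        false, ["verify"]);
      const [bundle, sig] = await Promise.all([
        fetch(bundleUrl).then(r => r.arrayBuffer()),
        fetch(sigUrl).then(r => r.arrayBuffer()),
      ]);
      const ok = await crypto.subtle.verify(
        { name: "ECDSA", hash: "SHA-256" }, key, sig, bundle);
      if (!ok) throw new Error("bundle does not match pinned app identity");
      return bundle; // hand off to whatever actually runs the app
    }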


Can you explain how your meta-browser should differ from an ordinary operating system? I.e. additional functionality it would have when not compared to a browser, but to the substrate browsers currently run on.


In a nutshell: it would only be a simple shell, with an extremely minimal API (for user settings, etc.), which could sandbox browser engines. This (a) makes "switching between browsers" a seamless experience for the end user, and (b) makes the development experience much more friendly for the developer [since they're developing for their browser engine of choice].


Dear god why. What is the point of using the web if I'm going to target specific browser engines only? I may as well write native apps and target specific OSs only.

If your idea were implemented, I'd expect that in 30 years someone will propose the meta-meta-browser so developers can choose the meta-browser that hosts their chosen browser engine.


I don't get the impression you understood what I said.


>I've posted my idea to many similar threads, so I apologize for repeating myself, but I feel pretty passionately about it and I feel it's a good idea to try to spread. Until I start receiving convincing evidence / arguments that the idea isn't worth spreading I'll probably continue.

I didn't read a single word about the idea, but I already like you, because this is the force that drives the change, not yet another framework based on rotten primitives.

My personal view on the topic is that we are in a trap now. Nobody is able to build NewBrowser in a sane amount of time (or a lifetime, at least), so the web is the only alternative. But there are already techniques, almost always overlooked despite being designed for exactly this sort of thing: virtualization. While wasm looks promising, it is just another waste of engineering in the shadow of hosted full VMs. Virtual machines can already do anything that the web has to provide: isolation, hardware abstraction, time sharing, quotas, etc. Millions of VPS hosts function on the same principles our browsers are trying to implement.

I think that the newweb must be built as a virtual machine, with the little difference that it could actually access the host system to interact with user data or other "sites" or "apps". Not directly, but via networking to 169.254.254.254 or something like that for streaming, and shared memory (think DMA) for intensive tasks like video streaming and accelerated graphics.

For those who are unsure about that approach, I can tell you that I'm running a Linux operating system in a VM hosted by Windows 7, and it talks to the host via built-in samba and regular sockets. Moreover, if I need a slightly different setup, I simply clone my Linux so that it shares all the data (no copy) but holds distinct changes. The same way, your new frontend could just cheap-clone an existing VM and apply a few fixes over it to actually implement the UI.

Of course, Linux is a bad example, having startup time comparable to modern website loading. It should be really fast and lightweight, I think something like DOS+OpenGL+sound driver should do the trick.


But how do you deploy VMs to the tightly controlled environment of mobile devices?

The good thing about web development today is that you can deploy the client side of any application to any smartphone, since all of them have been built with strong web browser support - and the browser is a fairly complete platform, with major browsers being available and relatively compatible on all mobile and desktop systems. I don't know of any VM that shares those qualities.


As ARM VPSes are available, proper virtualization does work on both x86 and arm. It means that VT can be implemented on mobile (not sure about the somewhat custom iPhone chips, though). The fact that it isn't yet implemented is no stopper. If all three mobile OSes said "here is your industrial-grade isolated vm, do anything you want and note the time/battery/memory restrictions" then the problem would be solved. It should not even be as complex as a real "virtualbox", because we don't need to emulate an entire PC, only the ui-related parts of it.

>how do you deploy VM

Clone the OS-provided vm, put the binary into its address space from cache and/or network, and run. It connects to the host system and other vms via sockets/shm and does its job, sharing real hardware in a host-defined, predictable way. Virtual private servers do that every day.

Edit: interesting link I found searching for arm virtualization: https://gizmodo.com/5890453/can-vmware-put-a-virtual-machine...


> If all three mobile OSes said "there is your industrial-grade isolated vm, do anything you want and note the time/battery/memory restrictions" then problem would be solved

Yes, well, being theoretically possible is not the same as being viable and having solved the chicken-and-egg problem. I think webasm nowadays has a better chance of becoming adopted as the standard for a common platform.


It is the most probable outcome indeed.


I think it's a useful thought experiment. Some reasons why it seems like a bad idea:

a) Bugs. Browsers are already pretty much the most complex software out there, and they have tons of bugs. A meta-browser would be orders of magnitude more complex, and so likely have orders of magnitude more bugs.

b) Security. A special case of a) above. Browsers have had decades to gradually iron out security issues, and we still see periodic flare-ups. A meta-browser would start from scratch and be taking on a much bigger problem. You would need some way to let people add new engines without allowing them to insert malicious code, interfere with other websites' browser engines, steal people's information, etc. It's hard enough protecting people when all malicious websites have is html and javascript. It's much worse when they have the ability to write assembly. See https://en.wikipedia.org/wiki/ActiveX for an earlier attempt at this.

c) Adoption. Writing a browser engine is a lot of work, which is why there hasn't been a new one in a decade. If every web app had to provide a browser engine nobody would build web apps for your meta browser. Everyone would end up just using one of your default engines, at which point you're back to the current state of the world but with all the complexity of points a) and b) above.

It's worth backing up and asking yourself: what is the problem you're trying to solve? Then we can talk about whether it's really a problem and what the solution might look like, without immediately barreling down the path of the first solution that comes to mind.


That idea is suggested in my article, in the paragraph where I suggest forking Chromium to add a new tab type.

Effectively you'd have web and newweb tabs side by side. You'd get some of the Chromium infrastructure 'for free', like user switching, the nice tab dragging code and so on. NewWeb tabs would not contain the URL bar, back button, reload button, bookmark star, extension buttons etc. But it might reduce some of the mental overhead of having to switch between 'browser' apps.


Thank you for the post, and for finding and reading my comment. It's extremely thought-provoking, and it's inspiring to know that these conversations are happening.

The "new tab type" idea sounds like it fits. In a way I see the "browser renaissance", that I think (hope) is going to happen within the next decade, is also more than just about sandboxing browser engines. When you follow the line of thought further I think the browser core becomes supported by a set of decoupled libraries which will be reused by different browser engines.

I think the toughest hurdle for this kind of thing is probably abstracting away the details while still making it possible for end users to make educated / granular decisions, so that they can understand more or less what the security implications of certain actions / settings would be. I imagine those two things (user knowledge and the need for abstraction / shielding users from themselves) will eventually converge on a happy middle ground. But for starters it could (for the least knowledgeable users) probably be something like providing a handful of options like "extra safe", "safe", "maybe trouble", "danger zone".

Though to be fair "danger zone" would probably mean something different than it historically would, since the "shell app / meta-browser" hosting the browser engines in theory would prevent an application from escaping its sandbox, but instead could allow an app, within the confines of the user's settings, to do things the user didn't expect.


You've got browsers today that are finally capable of doing everything from playing interactive media and running real applications to displaying unstyled text documents. They're really good at all of those, and they're based on widely distributed open source tech (look how many browsers, HTTP servers, etc. there are).

You can reach 2 billion people for pennies. What's more, this stuff isn't hard. Children can learn HTML and CSS, modern web frameworks make it so that you can build great stuff on the back or frontend in a matter of days or weeks. Seriously, write a small Sinatra or Flask app. It's easy.

Why is that not good enough? Are we really so ungrateful as to believe that reinventing everything is the right step now? Nothing is stopping anyone from taking existing tech and building great, usable, lightweight sites. Seems like focusing on that would be about a billion times more productive, yeah? You can't really expect to throw away 30 years of ongoing progress and build a better experience in months or even years without losing something.

Just seems like a bunch of fluff. He'll never build anything like it. This is like sitting there in 1600 and saying that the printing press sucks because it doesn't encrypt documents. You've got so much potential that remains unharnessed (low-hanging fruit even), and we'd rather build a new system.


As someone still familiarizing himself with all the esoteric arcana that makes up modern web development, I'm always open to reading what-ifs. The modern web to me is WTF in a lot of ways. I remember reading an article on HN about the history of CSS that was quite fascinating. I have nothing against the author's article. It is pretty interesting and I even find his other blog posts quite illuminating to read.

IMO what's rubbing people the wrong way is not his specific points but how he is framing them. It really does not matter how brilliant his alternative is ... it's clear that HTTP is firmly entrenched and the network effects run so deep, it would require a miracle to really "start fresh". I'd really like to be proven wrong though.


I'm open to what ifs, but I've seen enough critiques of pretty basic stuff that he's got wrong to let me know that he doesn't really understand the web as well as he thinks he does, and therefore his criticism is pretty biased, and misleading, and unlikely to produce any real solutions.


Douglas Crockford, who understands the web, making similar points:

https://youtu.be/fQWRoLf7bns


Yea? Do you mind sharing the basic stuff that he's gotten wrong? Because I learned a few things and found the article to be well researched (but what do I know ...)



On the one hand, it's a common sentiment that we should just be grateful for how great things are today. See, for example, the famous comedy routine "Everything is Amazing and Nobody is Happy" by Louis CK [1].

On the other hand, would we have cell phones, ubiquitous credit card-capable point of sale machines, and airplanes with in-flight Internet, if people hadn't dared to dream that things could be better?

Yes, children can learn basic HTML, CSS, and JavaScript. But what if we had a platform that was easy to learn and not full of security pitfalls? One can dream. And the OP has some concrete suggestions for how to get there.

And sometimes, you have to drop existing systems, or aspects of them, to get to that better future. For example, on a cell phone, you don't actually get a dial tone and send DTMF in real time, as one did on a touch-tone landline phone. (This reminds me of jacquesm's contrast of DTMF and GSM [2], posted in the comment thread for part 1.)

[1]: https://www.youtube.com/watch?v=q8LaT5Iiwo4

[2]: https://news.ycombinator.com/item?id=15322196


It's not that I'm advocating being a luddite, stopping progress, or uncritically accepting the web; it's just that the most useful form of idealism (at least as far as I can see) is realistic idealism, because it's the most likely to be implemented and effect change.

It's just hard for me to understand how realistically a clean room implementation of NewWeb is gonna beat out something that's already had billions poured into it and even today is progressing. The much easier step is figuring out how to make the web better, but it isn't sexy. No matter what you come up with is gonna be imperfect and criticized and have holdouts anyway.


Yes, on reflection, I agree with you about that. After all, cell phones (my main analogy from the Louis CK sketch) are still compatible with the PSTN.


> Why is that not good enough?

I don't think "why isn't what we have good enough" has ever been the mantra of pioneers in their field, but ok

I don't get why you seem to take such exception to the stance that "we can do better". We can always do better.

Do you imagine 500 years from now the same technologies and web models in play as today? Even 100 years from now? Do you really think they are the pinnacle of the web?

If you really do, I'd sincerely love to read your counter blog post on this, as I reflect on this quite a bit and have generally always come to the conclusion that there's a long way to go.

> This is like sitting there in 1600 and saying that the printing press sucks because it doesn't encrypt documents

If that's so, isn't your comment akin to saying "why isn't the printing press good enough for you"? :)


> Why is that not good enough? We really so ungrateful to believe that reinventing everything is the right step now?

Just about every feature you list (apart from the distribution, got to give the web that) is stuff that we could do 20 years ago, so yes, I'm ungrateful that we've traveled so far but still gone nowhere, and backwards in many ways.

> Children can learn HTML and CSS, modern web frameworks make it so that you can build great stuff on the back or frontend in a matter of days or weeks

Kids have been doing that with BASIC since the '70s or '80s.


make me a p2p app that runs in a browser with acceptable performance on mobile. I want soft real-time sync between n users with no central router or global data structures. I want to share arbitrary blobs up to 1MiB or so and stream larger data in with BitTorrent. my traffic should be anonymized. I should also be able to automatically replicate my data across all of my devices.


Perhaps go back to a sandboxed VM model where you access a URI that's a program with attached data objects. Maybe have a standard for updatable "peripheral" modules, but provide standard sandboxed storage, display, messaging, and streaming APIs to start.
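
Something like this, perhaps; a hypothetical sketch of the API surface such a sandboxed program might see (all names invented):

    // the only capabilities the program gets are the ones handed to it
    interface SandboxEnv {
      storage: {
        get(key: string): Promise<Uint8Array | null>;
        put(key: string, value: Uint8Array): Promise<void>;
      };
      display: {
        render(frame: ImageData): void;
      };
      messaging: {
        send(to: string, msg: Uint8Array): Promise<void>;
        onMessage(cb: (from: string, msg: Uint8Array) => void): void;
      };
    }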


yes! this is actually something that i'm slowly hacking away at... and you've described the architecture pretty well!

https://www.heropunch.io/roadmap.html#finally-relax

site still has a few bugs (it isn't officially "out" yet, but it's topical so why not).

the sandbox VM is a docker container with a standard interface for storage (relax/vfs), display (wayland), messaging (relax/pub).

vfs supports torrents and partial replication, so you're pretty much always streaming. we try to cache things to keep it fast, but different systems have different storage limits so we have to assume that we're always going to be fetching from the network.

applications can actually consist of multiple docker containers with different capabilities that can be scheduled on any host that has been paired with your user key. these can be updated individually or in tandem.

docker registries have to be explicitly trusted before you can run anything. we realize that docker isn't a security boundary by default, but we are attacking that problem from a few angles.

we implement a lot of this functionality as docker engine plugins. so you get anon overlay routing and distributed storage transparently from the perspective of the application.


That sounds pretty reasonable and interesting for prototyping, but also possibly heavy weight to run as a "browser". But I think it's basically what we seem to be moving towards anyway - if we could securely run it, the simplest model of interaction is to just download a program + data and run it.


for "browsing" you'd have a browser app along the lines of patchwork[0], might even be patchwork actually since we use the same core tech. i don't think they need to be the same thing, personally.

[0]: https://github.com/ssbc/patchwork/releases


You are thinking about how to evolve the web, while meanwhile the rest of the world has not yet discovered it. I mean, people are still sending word documents via e-mail instead of making web documents in HTML and CSS with a URL. When Dropbox came out you guys thought it was stupid, while normal people found it amazing that you no longer had to e-mail or print documents in order to share them. People are using the web to search for information. Now we have to make it possible for them to also share information. Sure, we have Twitter and Facebook, but that's not "the web" even though many people believe it is.


>people are still sending word documents via e-mail instead of making web documents in HTML and CSS and with an URL

What's so bad about this? I know how to make web documents but I still do this. Email attachments are available for as long as the recipient chooses to keep them, and can't be changed once they've been sent. Sometimes that's a feature.


Maybe a revision system can be baked in. Like, the browser could show you what has changed since your last visit. And you could see what a web site looked like on a given date.


The web is a publishing medium. It sucks a lot as a sharing medium.

If your audience is anything less than all of mankind, sharing your data over the web requires an unending number of hacks, and another medium where you can communicate how to jump through them.


Let's start with the protocol app:// and build from there.

http:// is for hypertext, app:// is for apps

app://microsoft.com/word should be explicit enough
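
For what it's worth, you can already experiment with custom schemes today, though browsers only allow safelisted or "web+"-prefixed ones; a sketch (the handler URL is hypothetical):

    // a page at apps.example.com claims the web+app scheme; links like
    // <a href="web+app://microsoft.com/word"> then route to the handler
    navigator.registerProtocolHandler(
      "web+app",
      "https://apps.example.com/open?target=%s");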


That's not what protocols are for, though. It would still be served over http, so what you're looking to replace is actually the world wide web, or www. In other words, http://app.microsoft.com is what you meant.


That doesn't have to be true. An app protocol could have, for one example, a structured always open connection and a new set of useful verbs/actions that are more conducive to apps. It could tell the browser to use a different rendering path with a different more comprehensive base UI set for another example.

These are already possible by retrofitting existing technology but the question being asked is what would a reimagined, more app friendly web platform look like.
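
For the "structured always open connection" part, the closest retrofit today is a WebSocket carrying app-level verbs; a rough sketch (the server URL and verbs are made up):

    const ws = new WebSocket("wss://app.example.com/session");
    ws.onopen = () => {
      // app-level verbs ride on one persistent connection
      ws.send(JSON.stringify({ verb: "subscribe", resource: "inbox" }));
    };
    ws.onmessage = (ev) => {
      const msg = JSON.parse(ev.data);
      // dispatch on msg.verb here...
    };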


Though in practice, subdomains are rarely used in that sense anymore (both www. and ftp. are declining in usage), while the protocol is used to determine the program that should handle the request.

Maybe https+app://microsoft.com is better? (plus signs are valid in uri schemes)


I am not so sure that all protocol schemes require that the transfer be over http. How about if I type ftp://example.com in the browser? Or https? Or gopher? :)


They don't because those are different protocols. I'm assuming he means SPAs/PWAs/hell even XUL, and not some kind of byte code application.


No, it would not be served over http. Consider ftp://microsoft.com - is that also served over http?


What should app:// support that http:// doesn't? Or, from another direction, what should http:// have restricted that it currently allows that we can allow app:// to handle instead? (and let's ignore the impossibility of carrying that out, since the discussion might be informative anyway)


The question you have to answer if you want a web-like thing is "how do we solve the problems of the current web for devices like phones, tablets, and TVs?" Then you have something that can potentially stem the move of eyeballs from an open and linkable web back to a bunch of clustered-off apps.

In other words, I think the question to be answered first is the fundamental browsing UX challenge for non-desktop devices, where you don't have mouse/keyboard all the time and browser apps are less always-open.

The author is mostly focused on security, but that alone won't move big groups of users. Much of what he describes reads to me as copying the current UX state of phone apps, not leapfrogging them.



I think you're looking for exp:// https://expo.io/


(I work on Expo)

We just use this protocol/schema to make sure that development URL links are tappable on your phone. Checking a URL domain works for lots of deep links, but not when you have IP addresses or dynamically generated ngrok URLs (like when loading an app from XDE).

You might ask, why do this when you can just scan or screenshot the QR code? Fun bit of tech debt: the current QR-scanning system was added to support the offline-only create-react-native-app setup, and the "send a link by SMS" flow predates that. If you've received an SMS with your development URL, it's pretty useful to have it tappable.

Related, this detail of the XDE <-> client app connection isn't really what people use for production, it's just for development and sharing with coworkers in most cases. To ship a "production" Expo app, users typically build an IPA or APK for uploading to the App Store and/or Play Store, where this URL is hidden from them.


Your added value here is -1 character.


The reason the web is successful is that it is a relatively _small_ platform that you can build on in userland.

I think ideas like including a single binary serialization format are misguided at best - in 3 years we'll want to do something better, and then what? Today we can just use protocol buffers on both sides if we please and be done with it - the web is about capabilities, not about APIs.


The web is not small by any stretch of the imagination. It already privileges three different generic data structure formats: XML, JSON and data-foo tagged HTML5. It also contains support for many different image formats, including an entire animated vector graphics language, several quirks modes, an entire OpenGL stack, a low level(ish) audio API, USB support, rich text editing that is hardly used because it's too buggy ... and peer to peer video streaming. That's, like, 1/10th of what the browser alone provides. But nobody uses just the browser these days, they all add frameworks on top!

But you know, that's also fine. There's nothing wrong with large platforms, especially if the reference implementation is open source. Big platforms do more work for the developer. The problem with the web platform is that it's such a bizarre mishmash of things, often designed in isolation without any specific app driving the platform forward. So you get design-by-committee standards that may or may not be implemented on any given browser.


> The problem with the web platform is that it's such a bizarre mishmash of things, often designed in isolation without any specific app driving the platform forward. So you get design-by-committee standards that may or may not be implemented on any given browser.

That's called organic growth and evolution with natural selection. I find it a much better way than a group of coders sitting down in their ivory tower and laying down the "proper" way I should do things, because they know better than me. Good luck with that.


What lives or dies isn't actually chosen by natural selection. It's chosen by a variety of arcane closed-door meetings between the major browser vendors where they agree or disagree to implement each others specs.


That's still natural selection, with browser vendors executing the selection part.

The problem with the web is ultimately the same as with everything else in this industry - it's that the fitness function sucks. And that, unfortunately, is caused by economic incentives being broken, which is not a trivially fixable problem.


I feel like the "bizarre mishmash" problem is really unavoidable. Any technology that is useful enough will gain widespread acceptance. Once that happens, lots more people want it to do lots more things. You can't keep the bad ideas out. That's just the way things go. I do, however, think that mature ecosystems find a good equilibrium between consistency and completeness after a while. It seems to me that the web is beginning to approach this.


For apps, I personally like React more than I'm willing to admit. Basically, React is MVw done right and has kind of won the JavaScript framework wars for me. My only wish would be that JSX were based on a standard syntax extension of JavaScript to handle markup literals in the language, rather than relying on transpiling/Babel. And that React could be more readily used with a notion of web components (e.g. layered on top of an independent component API).
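For readers who haven't seen what the transpiling step actually does, here is a minimal sketch (classic React runtime; the Greeting component is illustrative only):

    import React from "react";

    // A trivial component, written without JSX.
    function Greeting({ name }: { name: string }) {
      return React.createElement("span", null, `Hello, ${name}`);
    }

    // The JSX form `<Greeting name="web" />` is not valid JavaScript;
    // Babel rewrites it at build time into a plain function call:
    const el = React.createElement(Greeting, { name: "web" });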

For docs, OTOH, I think that anything going after this space has to be able to support all of HTML markup, given the massive installed base of browsers and web content. HTML as a markup language, however, has seen no love from the WHATWG, which focuses on APIs instead, and the W3C's XML and RDF efforts for a more declarative web are epic fails. I've spent significant time bringing SGML back to the web; it remains the only standardized technology able to completely parse HTML ([1]) and to define new markup vocabularies. One goal here is to come up with a way to publish content on the Web and on p2p at the same time, based on straightforward file replication, including for limited forms of distributed authoring.

[1]: http://sgmljs.net/blog/blog1701.html


It's just client-side CQRS/ES with re-invented terms (due to ignorance or the web bubble) slapped on top of a legacy scripting language and a document markup language.


The part about SQL makes little sense to me. The author seems to be equating relational databases with SQL, but I don't know why. Yes, that's how everyone learns about relational databases (or the approximation thereof that SQL provides), but SQL is just one (not so awesome) language for interacting with a relational database. You could provide a binary protocol instead of or in addition to SQL for any relational database. The protocol is not the implementation!
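For what it's worth, the major RDBMS wire protocols already work this way: binary, length-prefixed frames with SQL text merely as a payload. Here is a toy frame loosely in the spirit of PostgreSQL's protocol (the layout below is illustrative, not the real spec):

    // TypeScript sketch: type tag + length prefix + payload.
    function queryFrame(sql: string): Uint8Array {
      const body = new TextEncoder().encode(sql + "\0");
      const frame = new Uint8Array(1 + 4 + body.length);
      frame[0] = "Q".charCodeAt(0); // message type tag
      new DataView(frame.buffer).setUint32(1, 4 + body.length); // length prefix
      frame.set(body, 5);
      return frame;
    }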


You're right, I used SQL as a lazy shorthand for "relational database management system that definitely supports SQL but could also support other protocols in theory".


> To stress a point from my last article, text based protocols (not just JSON) have the fundamental flaw that buffer signalling is all ‘in band’, that is, to figure out where a piece of data ends you must read all of it looking for a particular character sequence

While this is indeed a drawback of many text-based protocols, there's no fundamental connection between it and being text-based. Many binary protocols and formats have the same drawback. Conversely, netstrings [1] don't require scanning for delimiters, escaping any special byte values, or special behavior for netstrings-within-netstrings. They could easily be used as the basis for a text protocol (and probably have been, somewhere; BitTorrent uses something similar but not identical, mixed with delimited fields, and not text-only).

[1] https://cr.yp.to/proto/netstrings.txt


tl;dr: a netstring is a string prefixed by its length in ASCII-encoded decimal.
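A minimal codec sketch (ASCII-only for brevity; real netstrings count bytes, not UTF-16 code units):

    // Format: "<length>:<data>," e.g. "12:hello world!,"
    function encode(s: string): string {
      return `${s.length}:${s},`;
    }

    function decode(input: string): { value: string; rest: string } {
      const colon = input.indexOf(":");
      const len = Number(input.slice(0, colon)); // length arrives first, out of band
      const value = input.slice(colon + 1, colon + 1 + len); // no delimiter scanning
      if (input[colon + 1 + len] !== ",") throw new Error("malformed netstring");
      return { value, rest: input.slice(colon + 2 + len) };
    }

    console.log(encode("hello world!"));           // "12:hello world!,"
    console.log(decode("12:hello world!,").value); // "hello world!"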


This has a "let's start all over" feel to it. That's usually not the optimal way to make big changes.

All of the design principles could be implemented and adopted incrementally within the current system (and to a greater or lesser degree have been). This gives the current system a massive competitive advantage. E.g., to the extent you could convince many people to use a certain unified data representation (we have many dozens to choose between!), they can adopt it without swapping out everything else at the same time. They don't need to switch to the NewWeb to get the part they want, so why would they? The entanglements of the current web make a wholesale switch to something else an extremely difficult sell.


> 4. Platform level user authentication

Sounds like an awful idea to me; I consider decentralized identities a huge feature of the web.


Can you explain why email address client certs are less decentralised than what we have now?

In practice all web apps use the email address as the underlying identity. Email addresses are a decentralised identity, to a large extent. But the fact that it's so awkward for end users is pushing people towards solutions like Facebook and Google OAuth, which aren't bad, but it seems like a better design to let the browser present identity credentials instead of always having to go back to a big centralised network to get it. The scheme I propose is somewhat similar to Mozilla Persona, if that helps.


According to Wikipedia:

> The decentralization aspects of the protocol reside in the theoretical support of any identity provider service, while in practice it seems to rely mainly on Mozilla's servers currently

So we'd be putting all our eggs into Mozilla's basket, and we'd be screwed if they got hacked.

There are also websites that do not use email addresses as identity, HN for instance.


I may have missed it, but what about shared computers like in a library? I wouldn't want the library browser to have my certs.


You wouldn't want the library computer to have your session cookies either. There are already mechanisms to do that sort of thing, like Incognito mode or "log off wipes cookies".


I, on the other hand, don't love managing dozens of passwords. And either way, a browser could easily offer multiple identities via multiple email accounts if you desired under their model.


How would this be meaningfully different from Java Web Start? Or Adobe AIR? Why would this succeed where those failed?

Why should we expect this platform to be better than Android?

So many choices we make when designing systems are "six of one, half a dozen of the other" tradeoffs. One side is not necessarily "better" than the other, because different tradeoffs are made. You add up thousands of these tradeoffs and now it appears that a system is full of cruft and should be thrown away.


Don't know about Adobe AIR, but Java Web Start was bloated, slow, and ugly, with a complicated API and poor security. Perhaps this new attempt would simply have a better implementation.

Currently the most common development approach is to develop a web app, then later an iOS and Android app. So there must be a deficiency with web apps if so many companies develop a native app even when they have already developed a web app. If Android apps also ran on Windows, Mac and iOS then they would most likely dominate web apps.


No mention of Ted Nelson or Xanadu?



There should be a name for the fallacy in this type of post -- perhaps the "boil the ocean fallacy".

Something as big as the web is necessarily evolved rather than designed. Obviously every programmer and their mother thinks they can design a better web. But that's not how large systems are built.

I take more of a pessimistic view of systems, as in:

https://en.wikipedia.org/wiki/Systemantics

This post seems to have a little too much optimism :)

In particular one slogan from that book is "large systems operate in a permanently degraded mode". They perform poorly and some part of them is always broken. There is massive redundancy and inefficiency.

That couldn't be more true of the web. (It's also true of things like governments and health care systems, but those things also serve their purpose to some extent.)

But this post does nothing to convince me why that won't be true of a replacement. It seems oblivious to the facts and forces of technological adoption. There is only one mention of the word "compatibility" in the post, and it's in regard to authentication. I'm not sure where all the binary protocol stuff is coming from.

Without compatibility, it's not very interesting to contemplate, because it will never happen.

There will be something beyond the web, but it won't be "like the web but with certain of my pet peeves fixed". It will have to have a fundamental new capability, like iOS/Android were to Microsoft Windows, which is another platform with an enormous network effect. Whatever new platform that comes along will coexist alongside the web for decades, just like iOS/Android and Windows do.

EDIT: The style and tone of the book can be annoying to some, but there is wisdom there. Here are some nuggets of wisdom relevant to the web:

- One of the problems that a system creates is that it becomes an entity unto itself that not only persists but expands and encroaches on areas beyond the original system's purview. -- This is everybody's complaint about the web evolving from a hypertext platform to an app platform.

- Complex systems tend to produce complex responses (not solutions) to problems. -- Every new technology introduced on top of the web (e.g. PHP, CGI, Java applets, Flash, CoffeeScript/TypeScript, PaaS, etc.) is better viewed as a temporary response than a solution.

- Great advances are not produced by systems designed to produce great advances. -- I believe this applies to the solution proposed in the original article.


I think that's why the web is so successful. No one likes the DOM APIs but they provide important capabilities and people just wrap them anyway.

The fact you can take the capabilities and evolve the APIs continuously is the big pro here and it's a lot easier to do on the web than on other platforms.

The "binary protocol stuff" is a great example of something the web _shouldn't_ do because it can be solved in a library (and already is) and doesn't add a capability.

WebAssembly on the other hand is great since it adds a capability.

At this point, where Chromium has more LoC than the Linux kernel, I think "boiling the ocean" is a very good analogy - and the only thing that got a little close was mobile apps.


It seems as if the author makes a nod to that point:

>We also need to focus on cheapness. The web has vast teams of full time developers building it. Much of their work is duplicated or thrown away, some of what they’ve created can be reused, and small amounts of new development might be possible … but by and large any NewWeb will have to be assembled out of bits of software that already exist. Beggars can’t be choosers.


Well, I'm talking more about design than implementation.

It's good he acknowledges the point about reusing existing code. But the design still seems to be ignorant of the network effects of the web, or any existing platform. It seems completely incompatible, so I don't see why anyone would use this new platform.

A good analogy: to break the Windows monopoly, superior hardware and a superior OS X barely made a dent, if you count by percentages.

What you really need is iOS, i.e. brand new functionality. That's how Apple "beat" Microsoft. Platforms are generally not "replaced" with something similar but slightly better.

(And what he's describing sounds better on some fronts but worse on others.)


When Android and iOS were new, they were completely incompatible with everything else including the web. Yet somehow I find myself using Android apps every day and many mobile websites I visit try to push me towards their app version.

Console makers reset their platforms from scratch every ten years or so. New operating system, new hardware platform, often entirely new CPU architecture. Xbox is still around.

I think some people may be over-estimating the difficulty of dislodging the web. Obviously it'd be a long term endeavour. But web apps don't have a lot of network effects. If anything it's the opposite; web apps hardly integrate with each other at all, so using one rarely makes another better.


Yes, that's exactly my point... There will be new platforms, but they need significantly new functionality, like being able to carry around the Internet in your pocket.

There's a reason Steve Jobs worked hard to get Google Search, Maps and YouTube on the iPhone, and to have a completely new web browser written and optimized for it. The iPhone was bootstrapped off the web to some extent, and off open protocols like e-mail. (It didn't have apps in the first iteration.)

Or they need a proven customer base which is willing to shell out money for games.

The web is the highly unusual case, because it was not developed by a big company, but by a grass roots effort. It was a distributed RFC/Usenet-type process as far as I can tell. But the web offered significantly new functionality -- at the time nobody in the world was using networked hypertext.

The ideas laid out in the post seem heavy on developer pet peeves rather than user functionality / monetary incentives.

As much as it pains me to say this, security is almost a developer pet peeve, in the sense that users generally don't adopt products for security reasons. I think there's evidence they use products in spite of bad security.

Of course, I would very much like a more secure web. I think it would be a worthy goal to attack only that portion, while leaving out all the rest of your proposal. As far as length prefixes, it probably makes sense for the control channel like HTTP headers, but not for the data channel like HTML.


Isn't iOS a good example of my point? Steve Jobs tried the "just write web apps, we'll keep native to ourselves" line. It was rejected by developers.

I think users will happily assign some value to security if it's offered in a meaningful way. The virus-resistance of iPhones (and to a lesser extent Android) is definitely seen as a selling point by many people, mostly for the iPad.

A lot of developers think users don't care about security. I think what is happening is that users aren't being offered real security. Rather, developers ask "can I spend X hours this week on security?", but the value delivered, as perceived by the boss or customer, is often very unclear and often seen as near-zero, because people don't really believe programmers can write secure software. Additionally, security is often seen as a bottomless pit into which you can empty money forever with no perceptible difference in the product. The cost is far, far too high, and the likelihood of being "secure" in the end is far, far too low.

If a platform gains a reputation for security though, as iOS has done, then people will start to trust that programmers actually mean it when they say "we can make this secure" and users will start to care more.

> I think it would be a worthy goal to attack only that portion, while leaving out all the rest of your proposal.

It can't be done. Otherwise I'd have made such a proposal. The web is so deeply flawed, security wise, nearly nothing can be preserved. It all has to be reset.

For instance, switching to binary for the HTTP headers is already done in HTTP/2, but that hasn't had any noticeable impact on the rate of security vulnerabilities in web apps. It helps a bit: perhaps HTTP/2 is invulnerable to header splitting attacks (I'd hope so!). But the rest of the stack is still seriously flawed and that's where the bulk of the attacks are.
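For anyone unfamiliar with header splitting, here is a toy illustration of the in-band-delimiter problem in HTTP/1.1 (HTTP/2's length-prefixed binary framing removes this class of bug):

    // TypeScript sketch -- never build responses by string concatenation.
    const userValue = "x\r\nSet-Cookie: session=attacker"; // attacker-controlled input
    const response =
      "HTTP/1.1 200 OK\r\n" +
      "X-Echo: " + userValue + "\r\n" + // the CRLF inside the value ends the header early...
      "\r\n";
    // ...so the injected "Set-Cookie" line now parses as a genuine header.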


In my opinion we should eliminate shared data formats as much as possible. Instead, try to define the core of this distributed system in a minimal way; perhaps it only needs an inter-module messaging system and a module naming system. All other services are not standard, just modules.

So something like the browser is replaced by a micro-kernel style program, and all other functionality lives in outside processes. Each 'program' has a unique name in a global namespace; they are downloaded as needed and instantiated. The programming paradigms and stacks of each program are not prescribed. Programs instantiate other programs by global name as necessary, etc.
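A type-level sketch of that idea, with every name hypothetical:

    // TypeScript sketch of a micro-kernel style host; nothing here is a real API.
    type ModuleHandle = { readonly id: string };

    interface Kernel {
      // Fetch and instantiate a program by its unique global name.
      resolve(globalName: string): Promise<ModuleHandle>;
      // Messages are opaque bytes: the kernel prescribes no shared data format.
      send(to: ModuleHandle, message: Uint8Array): void;
      onMessage(handler: (from: ModuleHandle, message: Uint8Array) => void): void;
    }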


There are some interesting points. I might be wrong but it seems that the author would like to see a narrow set of technologies mandated from the front end to the backend, much more than HTTP/HTML/CSS/JavaScript (it's pretty much all that matters now). This didn't happen with the web for a reason. You can mandate end-to-end technologies when you completely control a platform (iOS, Android) but control and open web are contradictory. At best a controlled stack can succeed in a controlled subset of the web. The "do as you want" approach is what made the web successful (plus ease of deployment.) So, for example, keep sending data to the browser in the format you prefer. There are many, some good, some bad, appropriate to different needs. There is no need to mandate one which anyway is going to be obsoleted quickly. That would stifle innovation.

And about removing the back button from the browser

> the back button can be provided by the app itself when it makes sense to do so, a la iOS.

No thanks, IMHO that's the worst UI design decision ever made by Apple. I always have to figure out how to go back to the previous screen anytime somebody hands me an iPhone. On Android and in the browser it's just the back button, always there in the same position.

About sessions and authentication:

> NewWeb would not use cookies. It’s better to use a public/private keypair to identify a session.

> [...]

> A client side TLS certificate is sufficient to implement a basic single sign-on system

I'm open to those. I would generate a different key pair for every site (no SSO over the whole Internet, please), add a passphrase, and that's it. They're probably going to ask me for my email for the user profile, so we're back to email and password, but at least I'll only have to paste a password from the password manager. Note that I don't use the same email for every site (personal email, business email, customer's email, @mailinator, etc.), so using different keys is really important to keep separate identities. I guess it's not going to be a simple transition. Cookies provide sessions over a domain; I wonder how that's going to work with keys. Maybe "use this key for all of the .x.y domain"?
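The building blocks for this already exist in the standard WebCrypto API. Here is a sketch of a per-site, non-extractable session key (the scoping rule and the challenge flow are assumptions, not any real spec):

    // TypeScript sketch using the standard WebCrypto API.
    async function newSiteIdentity(challenge: string) {
      const keyPair = await crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        false, // non-extractable: the private key never leaves the browser
        ["sign", "verify"]
      );
      // A browser could scope this pair to e.g. "*.x.y" and prove possession
      // by signing a server-supplied challenge instead of sending a cookie.
      const signature = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" },
        keyPair.privateKey,
        new TextEncoder().encode(challenge)
      );
      return { publicKey: keyPair.publicKey, signature };
    }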


WeChat.


Why not just improve Gopher?


>> Here’s my personal list of the best 5 things about the web: Deployment & sandboxing

Shallow learning curve for users and developers

Eliminates the document / app dichotomy

Advanced styling and branding

Open source and free to use

I would add:

write once, run anywhere; dynamically adapt to any viewport; incremental transfer of resources; no deployment; allows linking of views and resources


I think MS got closer to this with Windows RT and its apps. They had/have a central store, which caused some problems, but that could be expanded.

The apps ran in a sandbox which allowed more than you might want, but new, more restrictive sandboxes could be developed.



The Web is an incredibly productive environment. HTML/CSS, JavaScript and AJAX are the core tech that makes the web so productive. Are there any platforms you can develop for in less time than the web? Loose typing is not bad for productivity - it's good for productivity.

The problem with the web is when people try to add libraries and frameworks that don't make sense for what you're building - and a lot of these frameworks are just really bad for productivity, like Angular 1, etc.


I like this far better than the first post; at least the author is proposing some concrete ideas that can be debated, which I think is worthwhile.

However, actually going and attempting to implement this idea from whole cloth is a fool's errand. I'm not really cynical by nature, so it pains me to be that guy, but in this case I just think it betrays a complete naivete about how software ecosystems evolve and thrive, and why the web in particular was successful.

The web you see today bears almost no resemblance to the original proposal. What got the ball rolling was a very simple document format and protocol that made it much easier to share documents. That was a unique value-add over Email/FTP/IRC/Usenet/Gopher that drove early adoption. The reason the web is so crufty is that all the application stuff was bolted on after the fact, but here's the rub: if they had designed it for applications from the beginning, it would have been too complicated and wouldn't have taken off. There were tons of proper app development platforms that could have formed the basis for "proper" cross-platform GUI app development, but none of them could cross the chasm to true ubiquity the way the web has.

By the late 90s, the amount of effort being invested in the web was far greater than any single company or organization could ever bring to bear on a problem. 20 years later it is a deep sedimentary stack of more technologies than anyone can really keep track of, and yes it's been quite tortured and abused, and often feels extremely janky.

But tempting as it might be to think you can design something better, you really can't. You can't solve all the problems the web has solved. It will seem good when you start, but as you go you run up against the edge cases, and the scope grows, and then you have to bring in more people to solve the new problems, and you start to lose control of the design again, and you end up with an entirely new mess that people write hand-wringing blog posts about. Either that or you play benevolent dictator and try to control everything, but then adoption slows and it becomes more of a proprietary platform that never gains ubiquity.

The only tractable way to improve the web is to pick an area and work on improving it. It's not as satisfying, because you have so much cruft to deal with, but it is possible to slowly improve things. It takes a long time this way, but you are leveraging the millions (billions?) of man-hours that have gone into web technologies to date.

That's not to say the web is invincible; yes, the web might be replaced, but it won't be replaced by someone building a better app platform (that will never gain traction, mark my words). Instead, it will be something unexpected: a simple use case that, in unforeseeable ways, over time, creates an entirely new ecosystem that simply makes the web irrelevant.


> There were tons of proper app development platforms that could have formed the basis for "proper" cross-platform GUI app development, but none of them could cross the chasm to true ubiquity the way the web has.

I think this is the heart of our disagreement.

App platforms that were actually designed, like iOS, Android, Java, etc., have all been very popular. Smartphones are ubiquitous, but the smartphone that bet most heavily on the web platform (Palm Pre) was wiped out by the smartphones that bet on relatively cleanroom designs.

Heck, the Android API isn't going to win many awards for simplicity or elegance, but the BeOS and Danger people had OS design experience and pretty much knew what they were doing. Android gets far more right (in my view) than it gets wrong. The entire "app revolution" that followed the iPhone launch was more or less a huge slap in the face to the web platform. If it was really so hard to beat the web, why is all the innovation on smartphones in the native app space, with web devs getting the crumbs years later after various Apple/Google/Microsoft talking shops have finished finalising and shipping WebTiltSensor or whatever feature we have in mind?

The key difference between mobile and desktop, in my view, is deployment. Android and iOS handle deployment and upgrade for you. Desktop platforms only started trying to tackle that recently, and mostly screwed it up. The web has a great deployment story.


> If it was really so hard to beat the web, why is all the innovation on smartphones in the native app space, with web devs getting the crumbs years later after various Apple/Google/Microsoft talking shops have finished finalising and shipping WebTiltSensor or whatever feature we have in mind?

Because platforms have different strengths. It's obviously not hard to design a better platform than the web for apps. What I'm arguing is that you can't take one of those cleanroom implementations and turn it into a truly cross-platform standard.

As popular as iOS, Android, Java, Flash, Qt and whatever else are or have been, they are still hamstrung by being single-vendor efforts. The web on the other hand is table stakes for any new computing device, on the manufacturer's dime; it's not a cost center or support burden for the platform "owner". This is a world of difference that's hard to overstate.

> The entire "app revolution" that followed the iPhone launch was more or less a huge slap in the face to the web platform.

Why? Again, they have different strengths. If you need high performance and access to hardware, then you have to go native. Of course all innovation will happen in closed environments where the vendor can control the full stack and move quickly. There was no way a loose set of open standards like the web could compete with that; the results should surprise no one.

But does this mean apps are going to eclipse the web? No! Because there is still a high threshold of trust to install an app. As much as the web security model is a mess, it also has been reasonably successful at isolating the hardware environment from the on-demand functionality it delivers. Can you imagine a world where you only ever install apps for everything and never use the web? Even if you use Google and Facebook's apps, there will always be a long tail that you aren't willing to install, and the web is there for that use case. Because of this, companies have to continue developing websites to serve the top of the funnel, and as they do so, they will demand more and more standards to tie into hardware etc, so slowly web standards will creep in and commoditize functionality which today is only available via native APIs.

More fundamentally, I think you underestimate the sort of worse-is-better strength of the web: document-centric, but supporting app-like functionality. There are far more documents than apps in the world, and many of the apps deal with things resembling documents. So the web has this incredibly low barrier to entry where you can throw some documents online, and then slowly build functionality around them. If you're working on apps all day, and living in SV, it's easy to see the warts and lose sight of what a powerful dynamic this is for web adoption.

I hope you can prove me wrong and come up with an idea that can revolutionize the web; it's just that this runs counter to my observations about the forces that shape standards and technology ecosystems at a level higher than individual minds and platforms.


My prediction: Nothing is going to succeed the web.


Probably something like Johnny Mnemonic


So... Android? I mean, I get that there are a lot of differences, and Android, the platform, would have to be much more open to begin to compete with the web. But it ticks off an awful lot of the boxes he's named. If one could run any app in the world on Android without installing it and without significant delay, it'd be a reasonable substitute for the web, I guess (which it can, because it has a web browser, but then we're back to the previously listed failings of the web).

Of course, maybe one can argue that openness is the killer feature of the web, and that unless Google really embraces openness, Android will never surpass the web.

But, I still don't buy it. I was critical of the first article, and I'm maybe even more convinced now that the web won't be beaten by a "better 90s", which seems to be what's being proposed, in many regards. The author seems to want to reset the clock just before the web exploded, and build something new from where we were in about 1994, but with a bunch of lessons learned from the web.

Some specific areas of contention I have:

1. IDE-oriented is a terrible idea, IMHO. IDEs are fine, I guess, but they always strike me as being indicative of insufficient/incorrect abstraction, rather than being a great productivity booster.

2. I think designing UIs for the web is today better than for desktop apps. Maybe this is indicative of my lack of experience with desktop apps, but even when I've tinkered with Android development, I found UI builders to be tedious and frustrating, especially when it comes to making them responsive. With Flexbox and Grid, and a component library, I can whip up a nice, responsive UI in minutes for the web...and I'm not even good at it!

3. On data representation and binary formats, I tend to think everybody settling on JSON is Good Enough. It's slightly sub-optimal, but it's standardized on the front and back ends, it is universal, and it can be compressed using standard universally available tools. To me, it looks like a classic "Worse is Better" scenario. Anything else will take years to work its way into common usage.

Which brings me to the biggest issue: by the time a new platform reaches even a tiny fraction of the reach of the web, the web will have likely caught up. It is very difficult to overstate how fast the web as a platform is moving today. No single developer can fathom how advanced a lot of the tools have become just in the past couple of years. I've been sort of giving myself a crash course in Node/React/etc. lately, and it's, frankly, astonishing. You can go down any of dozens of rabbit holes and find incredibly powerful tools in all sorts of domains. Building apps for the web is stupidly easy once you get over the (admittedly large) learning curve of putting all the pieces together, overcome analysis paralysis, and stop reading about every new library and framework.

I guess I remain unconvinced. I come away feeling the same optimism I had for the web before reading (maybe even more, because for nearly every point he makes, I can see a clear path where someone is already working on solving the problem for the web, or it already exists in a rudimentary form), and the same strong doubt that anything can beat the web short of being the result of the continuing evolution of the web.


Seems like Urbit ticks at least some of these boxes


gopher


RIP you glorious protocol


I'm honestly waiting for the return of Gopher. I'm not kidding.



