At this rate, I predict that about ten years from now, they'll just ditch the bloody memory-leaking browser and just ship out a VM and a language that will have a small GUI library with a DOM-like organization in the standard library compiling for it.
Sure, there are going to be nitpickers who will point out that we had that in 2005 and it was called Java, but what do they know about innovation.
I'm aware you're being satirical but it's an interesting argument which I discussed with a friend yesterday.
Java-in-the-browser wasn't a terrible idea per se. Some aspects of it were poor but the tech landscape back then was very different both in capability and in politics.
Indeed. The JVM-in-applet downsides (slow startup, memory hog due to fixed heap, unresizable, stuck in its box) were mostly contingent rather than necessary. There's no reason those mistakes have to be made again.
But let's discuss further; we should also face reality, not just dismiss it with satirical statements.
Java didn't come to dominate web front-end technology, and it's worth considering why. The points I see:
- the user needed to download and install the JRE
- poor integration with the HTML document: an applet was just a rectangular, self-contained box on the page, and scripting DOM nodes or handling events from applets was very inconvenient
- browser vendors (especially Microsoft) were against Java becoming the dominant platform
- in 2005, Java was the only language running on the JVM (unlike today)
BTW, I wouldn't say Java was very slow to start up.
In the '90s, I despised Java and kept applets turned off because they took forever to download, and once downloaded, would make my machine freeze for a few minutes while the JVM started up.
It's not so much of a problem now that we all have broadband and fast CPUs, but in the '90s it was enough for me to bin Java as "a slow piece of shit" and avoid it like the plague. That probably contributed to it not taking off as a client-side web platform.
My point was that, eventually, it won't even be a web frontend. They'll just get the web browser out of the equation. Of course, a web "widget" is still going to be there, it just won't be a program you open.
I do believe Java failed mostly due to politics -- and more precisely, due to this:
> browser vendors (especially microsoft) were against java becoming the dominating platform
Everyone agreed, in principle, that a portable, high-performance VM was what we needed. The problem was that every vendor insisted it had to be theirs, while ever so slightly sabotaging other vendors.
In the meantime, they all had to provide a working web browser.
Well, you're probably right. In that case, that's the main lesson to learn: if we want something to happen, we must think about how to "hack" the social system, how to refactor the political/social situation. Technically, a common platform for applications is not a difficult problem. BTW, I don't blame Microsoft more than the others; as you say, every vendor tried to sabotage the others, including Sun, which never offered a solution sufficiently beneficial, or unavoidable, for everyone.
Java's actually a lot older than that, 1995 I believe. Though I'm not sure when Java applets became available in browsers (certainly before 2005). It's a sobering thought that whilst (in my opinion) Java applets were a very good idea, the implementation left much to be desired and it's going to take us more than 20 years to turn those ideas into a good workable solution (of which WebAssembly may be a part).
Unfortunately, every other web designer today seems to think I don't want to read a bloody article, but rather to be engaged by an interactive article-reading application that's basically impossible to distinguish from native applications, except for those eighty quirks that are definitely going to be solved by morehacks.js and those new CSS perversions.
Web browser developers seem to cater towards those needs, which is how we ended up with browsers where I can run fifty gazillion floating-point instructions per second in JavaScript but it takes me five seconds to find a bookmark, three of which are spent hovering over the titlebar until I remember there's no menubar anymore.
I surf with JavaScript disabled by default and selectively enabled for a few frequently used sites that directly benefit from it. I rarely find sites that are unreadable without JS -- certainly less often than I used to find sites that were unreadable because of it. On the rare occasions I find a site that won't work without JS (most common symptom: a completely blank page), my decision more often than not is to close the tab and move on with my life. I don't think I'm missing out on much, and my computer's fans no longer scream constantly when my machine is idle with the usual dozens of open tabs.
I've run into several sites that have weird issues without JavaScript, but they always seem to be things that could have been implemented with traditional markup: missing form components, misplaced images, things like that. About 50 percent of the time[1], however, media-focused sites with complicated image-viewing "galleries" or a more obscure video player are totally useless without it.
It can be frustrating to have to go through this process of navigating to a site, realizing I've broken it, and then reloading with all the crap turned back on, but yeah, like you said, it's better than having my CPU revved up just to have those "SIGN UP FOR OUR NEWSLETTERS!" modals flying around the screen.
That's because the designers of most web pages want the features they provide. When you say "optional" you don't actually mean optional, you mean removed. Such a thing already exists, it's called Gopher[1].
Most websites today don't even gracefully degrade. One of the trendy blog/article sites that gets posted here regularly (it might be Medium) is just a column of text with lots of whitespace, but the text is loaded via AJAX, so without JS, you can't even read it.
HA! That would actually be lovely. Something easily-discovered, like a "isactualhypertextnotaturingtarpit" attribute for the <meta> tag (OK, maybe something less verbose) would probably solve half the problem.
Java has survived these three points. I think Java is an excellent language that is dying because Oracle has not handled security issues correctly and because Oracle has a very bad reputation.
Security ought to be the strong point of Java, not its weak point.
It survived but it didn't dominate the way that was expected. In the late '90s, Java was positioned and expected to be in the place that webapps live today - that is, something within your browser that would handle interactive live client/server applications.
If you are kidding (I just can't decide), be aware that Microsoft killed Netscape in the '90s because Netscape was starting to sell exactly that VM.
When Netscape became Mozilla, it shipped with XULRunner, which is exactly that, but nobody used it.
Firefox OS is that, again, sold for smartphones.
If they come out with a good VM (measured by the languages it can run), and a good DOM-like organization, it will happen in no time. But if we get another XUL, it just won't happen.
| When Netscape became Mozilla, it shipped with XULRunner, which is exactly that, but nobody used it.
I would hazard to disagree with that statement - the company I was working for at the time shipped thousands of devices out with XULRunner, and I wrote several XPCOM objects to support the custom hardware we were shipping at the time.
I still miss some of the niceties that came from that.
I think you'll be disappointed. I predict that the browser in its current form will not be going away, no matter how much some people may wish for it.
Serious question, because a lot of people have had this sentiment; what is it you think you'll get from WebAssembly as far as new languages that you don't already get from transpile-to-JS languages today? There's everything from very dynamic languages like Opal to typed languages like Elm. What else do you expect?
Efficiency and ease of debugging? Just because it's possible to transpile to JS doesn't mean the resulting code will be fast/responsive or easy to debug. Which in turn means it won't be used for significant projects.
Debugging will use SourceMaps which already exist today. As for efficiency, I haven't read anything that suggests WebAssembly will be more efficient than asm.js, why would it be?
"Why: asm.js is great, but once engines optimize for it, the parser becomes the hot spot — very hot on mobile devices. Transport compression is required and saves bandwidth, but decompression before parsing hurts."
Imagine you create a JVM or CLR to WebAssembly converter. All languages and tooling available for them are instantly available for web. Life's good ;-)
Sorry, but without Apple a technology like this would go nowhere. Half the mobile traffic on the web comes from iOS, and its desktop share is significant too. And has Apple done anything with WebKit to deliberately prevent a technology from being adopted? They supported WebGL, for example.
Apple doesn't give a shit about native apps vs web apps. Recall that the original plan for 3rd-party apps on the iPhone was web apps only -- they only allowed native apps because so many people clamored for them.
JavaScriptCore is a top-notch JavaScript engine for the web. If WebAssembly takes off, Apple will just integrate Swift with it, ship a low-power, high-performance engine, and keep selling a ton of devices.
It would also be the end of user side scripts. It will enable the worst of Flash, Silverlight, Java. Horrible DRM schemes and other user hostile features will be built with this. It's a step back.
I think the author is referring to using Swift instead of phonegap/cordova/transpilers, since performance will be improved and you're not limited to just javascript as the originating language.
I don't think so. I think compiled and interpreted languages both have a role to play in web applications, just like they do on the desktop. edit: and on the server.
I actually don't see a problem with using a document renderer for applications, assuming it's actually treated that way. The document-rendering of a browser is lower-level than a UI toolkit, and that makes it more powerful in a lot of ways. But we need to be able to build abstractions on top of it to actually build UIs. I think react.js's approach (or something similar) is actually pretty good here.
A lot of the problems we run into are people treating DOM nodes as if they were UI widgets, when in reality they are lower-level primitives for drawing and capturing input.
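That react-style split can be sketched in a few lines. This is a toy, with invented names and none of React's actual API: the UI is described as plain data, two descriptions are diffed, and only the slots that actually changed would ever be handed to the low-level drawing layer:

```typescript
// Toy virtual-DOM diff: UI as plain data; only real changes reach
// the low-level layer. Names here are illustrative, not React's API.
type VNode = { tag: string; text: string };
type Patch = { index: number; node: VNode };

// Compute the minimal set of updates needed to turn `prev` into `next`.
// Slots that are identical in both descriptions produce no patch.
function diff(prev: VNode[], next: VNode[]): Patch[] {
  const patches: Patch[] = [];
  for (let i = 0; i < next.length; i++) {
    const old = prev[i];
    if (!old || old.tag !== next[i].tag || old.text !== next[i].text) {
      patches.push({ index: i, node: next[i] });
    }
  }
  return patches;
}

const before: VNode[] = [{ tag: "h1", text: "Hello" }, { tag: "p", text: "old" }];
const after: VNode[] = [{ tag: "h1", text: "Hello" }, { tag: "p", text: "new" }];
console.log(diff(before, after)); // only the <p> slot needs touching
```

The point of the abstraction is exactly the one made above: the DOM stays a low-level drawing target, and the "widget" layer lives entirely in your own data structures.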
Is the purpose of this to make ad blocking harder? It's much harder to work on an executable form than a declarative one like HTML/CSS.
Really, how many useful web pages need to execute much code? Games, sure. Other than that, not so much. Is this trip really necessary? We just got rid of Flash, after all.
If. All languages you listed need GC, which won't be in the initial version of WebAssembly. In other words, you won't be able to compile Ruby to WebAssembly any time soon.
You will be able to compile a Ruby implementation to WebAssembly, but you can already compile that to asm.js. That is different from compiling Ruby to WebAssembly.
Yes and no. You could have a native VM running WebAssembly's Ruby VM... oh my god, that sounded horrible as I wrote it... but it could certainly be better than asm.js.
JIT (generating code in memory as data and jumping to it) is one of a few things not very well supported in asm.js and presumably the initial version of WebAssembly, so Dart probably can't use the initial version and should wait.
WebAssembly (and asm.js) is not really a "VM" as in JVM or .NET in that it does not provide services they do, like GC and adaptive JIT. I think the best description of WebAssembly is a reincarnation of p-code.
It is very much a VM, in that it executes generic bytecode. Furthermore the Javascript engines of modern browsers (on which this proposal is based) do JIT JS. Not having GC doesn't mean it's not a VM.
Quoting the wikipedia article of p-code:
> In computer programming, a p-code machine ... is a virtual machine designed to execute p-code
Yes, it is a VM, but not a "VM as in JVM or .NET". "VM" in Bob Nystrom's article is used in the latter sense, hence the article mostly does not apply to WebAssembly.
On the topic of JIT, the whole point of WebAssembly (and asm.js) is that it is so simple that you don't need JIT to execute it. You can AOT it fine.
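The p-code comparison is worth making concrete. Here is a toy stack machine in that spirit -- entirely invented for illustration, not WebAssembly's actual instruction set: the bytecode is simple enough that a plain interpreter loop (or a one-pass AOT translator) handles it, with no adaptive JIT and no GC, which is roughly the position WebAssembly takes relative to the JVM or .NET:

```typescript
// A toy p-code-style stack machine. The bytecode is trivial to
// execute with a dumb loop, or to translate ahead-of-time -- no
// profiling, no adaptive JIT, no garbage collector required.
enum Op { Push, Add, Mul }

type Instr = { op: Op; arg?: number };

function run(program: Instr[]): number {
  const stack: number[] = [];
  for (const { op, arg } of program) {
    switch (op) {
      case Op.Push: stack.push(arg!); break;
      case Op.Add:  stack.push(stack.pop()! + stack.pop()!); break;
      case Op.Mul:  stack.push(stack.pop()! * stack.pop()!); break;
    }
  }
  return stack.pop()!;
}

// (2 + 3) * 4
const prog: Instr[] = [
  { op: Op.Push, arg: 2 },
  { op: Op.Push, arg: 3 },
  { op: Op.Add },
  { op: Op.Push, arg: 4 },
  { op: Op.Mul },
];
console.log(run(prog)); // 20
```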
Don't get me wrong; I like JavaScript. It has its warts, and it can cause all kinds of problems in the wrong hands, but both of those things are true of all popular languages.
However, performance has always been a sore spot for the current JS-to-machine-instruction process. Being able to write code that will execute faster is important, regardless of the language that code was originally written in.
Sigh, seriously all this front end stuff is becoming a PITA. Maybe it's just me but I still find the web a mess today. Lately I've been feeling the app stores sound more appealing than actual web development. JS this, 5 new frameworks a week that, some byte code tech... All this change is making me nauseous. Time to stick to the back-end I guess, maybe with a pinch of native apps.
Would it be wrong to interpret this as the possible end of browser focused engineers?
It seems to me that consumers have spoken: they prefer native apps (especially mobile ones) that they can quickly find on their phone's home screen. Consumers are using desktop web apps less, compared to native mobile ones.
So rather than people making web apps and then trying to port them to native, we're going to have a bunch of native apps that port to the web?
OR
... maybe web goes back to being the defacto standard if people start making insane unity engine powered byte code based web apps that perform like native..
Wow.
If C/C++ code can run at near-native speed in most browsers, especially mobile browsers, this could make a dent in app store revenues, especially for IAP-based games.
You will still be using sandboxed DOM APIs (which add both overhead and restricted API access), you won't have access to app store features like in-app purchases, and your ads will be blocked by the ad blockers that will ship built into browsers.
The draft design documents state that any WebAssembly code will have an equivalent text representation, so that is almost comparable to human-readable source code. It will not be a linear list of instructions like assembly source code, but will be structured, as WebAssembly is supposed to have an AST representation. [1]
I cannot see why the FSF would have a problem with this, then, since if you are distributing a .wasm file you can construct an isomorphic text representation from it. This is in the spirit of the user getting the source code of whatever they are executing. (Though I agree it would not be as readable as the original C/C++ source the .wasm file was compiled from, but something is better than nothing.)
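What "isomorphic text representation" means can be illustrated with a toy encoding -- this is not the real .wasm binary format, just a sketch of the property the draft promises: binary and text are two renderings of the same AST, and either one can be reconstructed from the other:

```typescript
// Illustration only -- not WebAssembly's actual encoding. A tiny
// expression AST with a binary form and an s-expression text form
// (loosely in the style of .wat), round-trippable in both directions.
type Expr = { op: "add" | "mul"; a: number; b: number };

const OPCODES = { add: 0x01, mul: 0x02 } as const;

function toBinary(e: Expr): Uint8Array {
  return new Uint8Array([OPCODES[e.op], e.a, e.b]);
}

function fromBinary(bytes: Uint8Array): Expr {
  const op = bytes[0] === OPCODES.add ? "add" : "mul";
  return { op, a: bytes[1], b: bytes[2] };
}

function toText(e: Expr): string {
  return `(${e.op} ${e.a} ${e.b})`;
}

const expr: Expr = { op: "add", a: 2, b: 40 };
const roundTripped = fromBinary(toBinary(expr));
console.log(toText(roundTripped)); // "(add 2 40)"
```

The names you compiled away are still gone, of course -- which is exactly the "not as readable as the original source" caveat above.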
The FSF is very clear what they mean by "source code"; the GPL defines it as "the preferred form of the work for making modifications to it". Decompiled .wasm isn't the preferred form (as you say yourself it's not going to be as readable as the original), so it's not source code, any more than the output of a Java decompiler or a disassembler would be.
From what I've seen, the current GNU approach to JavaScript emphasizes encouraging websites to use code that is actually free software (i.e. legally redistributable, etc.) and marked as such; I expect they consider that more meaningful than merely being technically able to look at the source.[1][2] You can do that just as well with assembled code, so I don't think they'd care that much, but I can't speak for them.
I don't think it makes much difference from an open/legal point of view. In fact it could clarify things: any F/OSS project compiling down to this target will have the source available somewhere, and anyone who doesn't want to join the party can keep their code obfuscated. In fact it would make just copy-pasting open-source code into proprietary projects less convenient and remove the "how was I to know?" defence: anyone using the code would have had to seek out the repo and could not have missed the licensing terms unless stupid or deliberately ignoring them.
There's nothing stopping people distributing closed-source obfuscated scripts at the moment. There's not a massive difference between minified javascript and decompiled asm.js.
Am I the only one who thinks there actually is quite a difference between minified & obfuscated javascript vs bytecode?
Minification & obfuscation can only do so much without changing the logic of the script. Sure, spaces are removed, variable and function names make no sense, but most of the logic is still there just as the developer intended.
If we look at compiled Java/C++/any high-level language code, how the application logic is presented is substantially different from the original logic of the application, making it much harder to understand how the application works.
I disagree. Obfuscation is essentially the same as a separate compilation and can do as little or as much to distort the original logic as compiling to a lower level language.
E.g. an obfuscator could make all method calls into one identically named overload, while a compiler could emit appropriately named subroutines. The compiler preserves the logic better in this case.
Minification is something different, but minifiers do not attempt obfuscation; it is more of a side effect of their goal.
Not much of a difference if you're running your JS code through Google's Closure Compiler (https://developers.google.com/closure/): it basically does a complete re-compilation of your JS, removing dead code and unused variables and applying lots of other optimizations, and in the end spits out a big ASCII blob that could just as well be bytecode. It has nothing in common with the original source code.
And then you throw JSNice at it, give it to a college student, wait a weekend, and have a nicely readable source version.
Source: Am college student, for fun I disassemble websites, including the funny VM Google built for ReCaptcha.
Additionally, I wonder what the EU thinks about this, as anyone who has the right to use a piece of software also has the right to take it apart, inspect it, and learn from it. This right cannot be signed away by contract (making "do not decompile" clauses invalid) and is violated by all these closed-source web projects.
Tbh, I should probably just decompile, deobfuscate and refactor the Google Inbox client source, and publish it on GitHub over summer break, just to show Google how useless and annoying their obfuscation is.
Try view source on any Google web app, like Inbox. For all intents and purposes it's pretty much closed-source already - deciphering that mess is a nightmare
it can be a nightmare but I did extract some small features from gmail... they minify names and things like that but they also add bloat in their obfuscation process. But I still think that a binary web would be a step backward.
> The WebAssembly team decided to go with a binary format because that code can be compressed even more than the standard JavaScript text files and because it's much faster for the engine to decode the binary format
Are there real world examples of websites where the size of the JavaScript and the time it needs to decode it make up a significant portion of the load time?
Did flash need a final blow? I assumed it is still around in small pockets because of projects that are abandoned and casual developers who don't intend to learn a new set of tools.
Make this thing mainstream and, yes, there will be plenty.
Why would you compile your application into x86 code if you can compile into WebAssembly, distribute inline on your web page, and just cache it at the client's machine?
(Yes, there'll be plenty of whys. There'll also be plenty of applications for what none of the whys apply.)
Does this eventually end up as yet another virtualization layer? How big and bloated does the web stack become before we start again, with a lean protocol and markup language for text documents?
WebAssembly will have a backwards-compatible JS polyfill, but browsers could implement WebAssembly natively and gut the JS middle layer. Mozilla (and then Microsoft and Apple) did the same thing with asm.js. WebAssembly becomes the unifying abstraction layer instead of JS.
The lean protocol and markup language will still be there to use though. This just gives us a new tool that is right for a different set of jobs and hopefully integrates well with the other parts when needed.
But they are leaner than piles of JavaScript, which is in turn leaner (at least from a dev-stack point of view) than a full compilation stack and process.
We have quite a range of useful options for our needs now:
* Plain text. Sometimes it is all you need. Just serve text/plain via HTTP and be done. Add Markdown or similar as you see fit.
* HTML+CSS for most occasions because you want to add at least a little style.
* Add in a little JS when you need some basic interactivity or because you are presenting a lot of information and the option to hide/collapse some of it is useful to the user.
* Add a lot of JS and libraries once you are getting into proper "application" territory rather than just a fancy page of information.
* Start compiling from other languages when your project needs constructs not available in JS or not easily emulated.
* Start considering WebAssembly when your project really really needs the performance gains possible over JIT compiled JS (or you want the binary format for obfuscation purposes)
None of those options invalidates the ones before it, so it really does come down to having a good selection of tools and picking the right one for the job. The fact it is all cross-platform (at least it is if you ignore "legacy" browsers like IE6 and Android 2.3.x, don't mind using less efficient polyfills for the not-quite-legacy-yet options, and are careful to remain compatible with a range of screen sizes and (once you introduce interactivity) input methods) and the big names appear to be playing nice enough is icing on the cake.
I think we are living in good times in this respect.