We've already moved past ARM and x86: there are plenty of MIPS Android tablets and phones on the market (just not in the West), and MIPS is extremely popular in home appliances (e.g., most LCD TVs).
But this is beside the point. The real question is what the web will look like even 10 years from now, with a large installed base of arch-specific binaries everywhere. I don't want to have to explain to my kids why their browser requires the equivalent of a PDP-11 emulator installed to watch certain classic movies and games.
We haven't moved past ARM and x86. Those two cover almost the entire mobile/tablet/desktop market.
If there's a real MIPS resurgence outside of WiFi routers (e.g., extremely cheap/low-quality tablets), then they can use PNaCl just fine until a native NaCl target is implemented.
However, given the market weight behind ARM and x86, this seems extraordinarily unlikely to be used outside of some extreme edge players. IIRC, MIPS support isn't even mainlined into Android.
Right, so by 2040 we can't distribute an application without rebuilding it for 5 or 6 different architectures, one of which is a bitcode format that doesn't even run at native speeds, despite the name.
I really love where this is going.
[Edit]
You're basically demonstrating the entire problem with the Native Client camp - you're so myopically focused on the immediate needs and the immediate environment (arm! x86! games! performance this year!) that you have absolutely no appreciation for the trail of destruction being left for future generations. You have Pepper, which is a huge, bastardized, specialized clone of all the WebGL/HTML5 audio/media APIs; you have a totally separate offline manifest format that doesn't work like a regular web app; and then you have 20 varieties of dumb formats pushed by a single company to advance its private for-profit agenda, when we already had all the vendor-neutral components openly specified, in place, and deployed, needing only incremental improvement.
"The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs." -Alan Kay
The problem with your camp is that you're so averse to learning anything but the technologies you already know that you're completely blind to their deficiencies. Javascript is a mediocre-at-best, terribly-flawed-by-design technology that is pushed forward only by its sheer ubiquity. The sooner we replace it with something better, the sooner the entire web benefits.
I'd love it if JS received a rewrite. But when you get down to it, JS isn't as horrible as you make it out to be. In fact, it's a very elegant language burdened by its initial design constraints (i.e., being forced to look like Java, on a tight time budget). So I don't understand all the hate.
Except, we don't need an elegant language (and JavaScript is hardly elegant, by the way). We need a virtual machine, and that's essentially what Mozilla is trying to define as a DSL in JavaScript with asm.js. Instead, why not create something better from scratch?
Look past surface (asm.js or JS syntax) to substance. Browsers (not just Firefox) have a VM, with multiple implementations and amazing, continuously improving performance.
Any from-scratch VM could be great, and I hope someone skilled (not a big-company, big-team effort) does one. Short of a new market-dominating browser, good luck getting such a new VM adopted and then standardized with two or more interoperable implementations cross-browser.
Shortest-path evolution can get stuck, as some here fear, but JS is nowhere near stuck.
Indeed JS is evolving before our eyes into a multi-language VM. asm.js and Source Maps are just a start, and we could have a bytecode syntax (or AST encoding standard, alternatively) down the road.
But any clean-slate work along those surface-bytecode-language lines, to say nothing of a whole new VM, is by definition high-risk and low-adoption (at first, and probably for a while), compared to small evolutionary jumps in JS syntax and to the optimized parsers and JIT'ing (and, with asm.js, AOT'ing) runtimes already in the field and auto-updated in modern browsers.
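To make the multi-language-VM point concrete, here is a sketch (the C-style function and names are made up for illustration, and the emitted JS is paraphrased rather than actual compiler output): ordinary C/C++ like this is what an Emscripten-style compiler lowers into the asm.js subset today.

    // Illustrative only: a plain C-style function that an Emscripten-style
    // compiler can lower into the asm.js subset of JavaScript.
    extern "C" int sum_ints(const int* a, int n) {
        int s = 0;
        for (int i = 0; i < n; ++i)
            s += a[i];
        return s;
    }
    // The emitted asm.js looks roughly like:
    //   function sum_ints(a, n) {
    //     a = a | 0; n = n | 0;
    //     var s = 0, i = 0;
    //     for (; (i | 0) < (n | 0); i = (i + 1) | 0)
    //       s = (s + (HEAP32[(a + (i << 2)) >> 2] | 0)) | 0;
    //     return s | 0;
    //   }
    // The "| 0" coercions mark every value as int32, so the engine can validate
    // the whole module up front and compile it ahead of time instead of
    // speculating at runtime.

No new bytecode and no new VM required: the existing, already-deployed JS engines do the work.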
I'm fine with the democratization of the VM as long as the VMs are standards-based (and compliant). If you look at comparisons of web framework performance benchmarks, frameworks tend to group around their VM even when compared cross-VM, including frameworks built on top of other frameworks that share a base VM (e.g., Vert.x is about as capable as servlets in the JVM world; Express is about as capable as plain Node.js in the Google V8 world).
It seems to me that the VM is very important. One of JavaScript's latest, greatest triumphs has been its standardization. I care much more about -- for example -- Backbone.js's impact on code organization/standardization within an application than I do about its capability as a framework.
I hope that ECMAScript 6 -- while it brings awesome new functionality to the language -- will also bring with it more of the backwards compatibility and standardization that these frameworks currently provide (in a somewhat fragmented, yet digestible way).
And I hope the same for the democratization of the VM.
> Right, so by 2040 we can't distribute an application without rebuilding it for 5 or 6 different architectures
5 or 6? 2040? All we've seen over the past 20 years is the gradual reduction of in-use architectures down to a few winning players. We're even seeing it in the SoC/MCU space now, where custom architectures are being replaced with ARM.
I expect to continue to see evolution and simplification, not revolution. If there is a revolution, well, we'll want to take advantage of the power of that fancy new hardware, I suppose.
> one of which is a bitcode format that doesn't even run at native speeds, despite the name.
PNaCl generates native code from the bitcode; the portable executable is translated to a native binary on the client before it runs.
[response to your edit]
> You're basically demonstrating the entire problem with the Native Client camp - you're so myopically focused on the immediate needs and the immediate environment (arm! x86! games! performance this year!) that you have absolutely no appreciation for the trail of destruction being left for future generations.
I would say that you're demonstrating why the web camp continues to fail so horribly to move past simple applications: you're so myopically focused on the ideology of tomorrow over providing a great user experience today.
In fact, I dare say you're leaving a far worse trail of destruction, because web applications are the ultimate expression of always-on DRM. When the servers for your "apps" disappear in a few years' time, the applications will disappear right along with them -- forever.
How is the Internet Archive going to archive your webapp?
> You have Pepper, which is a huge, bastardized, specialized clone of all the WebGL/HTML5 audio/media APIs
That's an overstatement of Pepper. It's quite simple -- far more so than the massive browser stack that WebGL+HTML5 brings with it.
In fact, Pepper is simple and constrained enough that someone other than an existing browser vendor or major corporation could actually implement and support a standalone implementation of it. By contrast, producing a compatible browser is such an enormous undertaking that even companies like Opera are giving up and tracking WebKit.
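For a sense of how small that surface is, here is roughly the hello-world skeleton from the NaCl SDK (the class names are mine; the pp::Module / pp::Instance entry points are Pepper's): one instance class per embedded module, one module factory, and everything else is an interface you opt into.

    #include "ppapi/cpp/instance.h"
    #include "ppapi/cpp/module.h"
    #include "ppapi/cpp/var.h"

    // One instance per <embed> element on the page.
    class EchoInstance : public pp::Instance {
     public:
      explicit EchoInstance(PP_Instance instance) : pp::Instance(instance) {}

      // Called when the page does embedElement.postMessage(...); echo it back.
      virtual void HandleMessage(const pp::Var& message) {
        PostMessage(message);
      }
    };

    // The module factory the runtime calls to create instances.
    class EchoModule : public pp::Module {
     public:
      virtual pp::Instance* CreateInstance(PP_Instance instance) {
        return new EchoInstance(instance);
      }
    };

    namespace pp {
    Module* CreateModule() { return new EchoModule(); }
    }  // namespace pp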
>> You're so myopically focused on the ideology of tomorrow over providing a great user experience today.
> Through ActiveX Technologies, today's static Web pages come alive with a new generation of active content, including animation, 3-D virtual reality, video and other multimedia content. ActiveX Technologies embrace Internet standards and will be delivered on multiple platforms, giving users a rich, open framework for innovation while taking full advantage of their investments in applications, tools and source code. "ActiveX brings together the best of the Internet and the best of the PC," said Paul Maritz, group vice president of the platforms group at Microsoft. "Users want the richest, most compelling content delivered through the ubiquitous Internet."
— Microsoft press release, March 1996.
We've seen this all before, and it was as much of a land grab using similar "user experience" excuses then as it is now. And despite it all, everything from this press release is already possible with open technologies today. Now that the open web is finally addressing the performance issue as well, platform proponents (let's drop the pretence – platform shills) continue to fiercely reject that effort, because providing a good user experience was never the original agenda.
I'm deeply reminded of the leaked internal Dart e-mail ( https://gist.github.com/paulmillr/1208618 ) where as much ink was expended on market share as on technical issues.
ActiveX was technologically flawed: it was specific to Microsoft Windows, and it was insecure.
NaCl is secure and portable, without sacrificing the user experience today.
Indirectly referring to me as a platform shill is unwarranted. The reason that I argue so strenuously for maximizing performance is simply that performance improves user experience.
> I'm deeply reminded of the leaked internal Dart e-mail ( https://gist.github.com/paulmillr/1208618 ) where as much ink was expended on market share as on technical issues.
I've read the whole memo, and I don't understand what your complaint is at all. Google seems to be focused on actually improving the web as an application development target, and making it competitive with modern platforms.
By your own words, Native Client is technically flawed: it's specific to x86 and ARM (and whatever other vaporware platforms they promise "next week" – PNaCl has been due for years already), and, if you take away the VM, it's also insecure. So basically Native Client is ActiveX inside a VM.
How exactly is that by my own words? You also seem to be confusing quite a few technical topics.
First, you're equating ARM and x86 with Microsoft Windows. Those are all "platforms", but they exist at very different levels of the technology stack, and the impact of relying on them is quite different.
Samsung and Apple can both ship compatible ARM devices. Only Microsoft could ship an ActiveX runtime.
Second, you're throwing out "security" as a red herring without actually explaining what is insecure. Exactly what are you talking about?
Lastly, if PNaCl is not viable, then NaCl is not viable. Thus, Google is holding off on NaCl until PNaCl is viable. In the meantime, I'm targeting mobile and desktop applications -- today -- which is entirely consistent with my earlier stated position.
"ActiveX is definitely going cross-platform," Willis says. "To be a viable technology for the Internet, we need to be able to support more platforms than Microsoft Windows. You cannot predict what kind of platforms will be on the other end."
Which part of ActiveX wasn't portable? COM is definitely portable. Shipping signed executables that implement COM interfaces is as "portable" as NaCl is, so no problems there.
If your complaint is that ActiveX controls used the Win32 API, then you're missing the point. A Mac or Linux ActiveX control would use OS-native APIs, but still use COM and be an ActiveX control.
Furthermore, if you had even looked up the Wikipedia page for ActiveX, you'd see that:
"On 17 October 1996, Microsoft announced availability of the beta release of the Microsoft® ActiveX Software Development Kit (SDK) for the Macintosh."
Yep, so non-portable that they released a Mac SDK back when Macs were PowerPC. Super non-portable. The non-portablest.
ActiveX was portable the way that the SYSV/SVR4 binaries were portable -- almost entirely in theory.
"It was introduced in 1996 and is commonly used in its Windows operating system. In principle it is not dependent on Microsoft Windows, but in practice, most ActiveX controls require either Microsoft Windows or a Windows emulator. Most also require the client to be running on Intel x86 hardware, because they contain compiled code."
What rational comparison do you have between NaCl's constrained syscall interface + Pepper, and ActiveX?
> Yep, so non-portable that they released a Mac SDK back when Macs were PowerPC. Super non-portable. The non-portablest.
Which produced binaries that couldn't run on any PPC OS other than Mac OS, and provided a runtime that couldn't run the ActiveX controls being deployed for Windows clients, even if it were somehow magically possible to run x86 code.
It was explicitly advertised as being non-portable, as a feature: "Developers can create ActiveX Controls for the Mac that integrate with other components and leverage the full functionality of the Mac operating system, taking advantage of features such as Apple QuickTime."
Not due to technical constraints, unlike ActiveX. It's a self-fulfilling prophecy -- Mozilla's refusal to adopt NaCl will, indeed, stall the adoption of NaCl.
Things might get more interesting if NaCl provides a higher-performance deployment target for Chrome users than asm.js does. Then you can just cross-compile for both.
Last I checked, Pepper was full of underdefined stuff that depended on details of Chrome internals. So no, it's not "simple and constrained" enough that you can implement it without reverse-engineering said internals....
A simple example: What happens if negative values are passed for the top_left argument of Graphics2D::PaintImageData? The API documentation doesn't say.
I just picked a class and function at random.... That sort of thing is all over the API.
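To spell out the kind of gap I mean, here's a sketch using the PPAPI C++ wrappers (the helper function is hypothetical; the Graphics2D, ImageData, and Point names are from the headers): it compiles cleanly, and the documentation never says what a conforming implementation must do with it.

    #include "ppapi/cpp/graphics_2d.h"
    #include "ppapi/cpp/image_data.h"
    #include "ppapi/cpp/instance.h"
    #include "ppapi/cpp/point.h"
    #include "ppapi/cpp/size.h"

    // Hypothetical helper, just to show the call in question.
    void PaintPartlyOffscreen(pp::Instance* instance, pp::Graphics2D* context) {
      pp::ImageData image(instance, PP_IMAGEDATAFORMAT_BGRA_PREMUL,
                          pp::Size(64, 64), /*init_to_zero=*/true);
      // Negative top_left: is it clipped, ignored, or an error reported later at
      // Flush() time? The reference documentation doesn't say, so a second
      // implementation can only match whatever Chrome happens to do.
      context->PaintImageData(image, pp::Point(-10, -10));
    }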
I think there will be [a MIPS resurgence] - maybe not at ARM's level of popularity, or even x86's, but MIPS has been acquired by Imagination and I'm sure they have pretty big plans for it.