This one I'm fine with since WebAssembly is a worthy replacement, but I'm still annoyed at Google discontinuing Chrome Apps.
Some examples of specialized apps I use all the time that would require a native app otherwise:
- Signal Desktop
- TeamViewer
- Postman
- SSH client
- Cleanflight drone configuration tool
It was one of the best things that happened to Linux desktops in a long time and removing it hurts users and makes them less secure.
Now everyone is moving to Electron, and instead of one Chrome instance, I'm now running five which use more than one GB of RAM each. Much less secure, too, since each has its own auto-updater or repository, and instead of being confined by Chrome's sandbox, they're all running with full permissions.
It also means I can no longer use Signal Desktop on my work device, since installing native apps is forbidden for good reasons, while Chrome Apps are okay.
It also hurts Chrome OS users, since Chrome Apps are being abandoned in favor of Electron. And it makes creating Chrome Apps less attractive for developers, since the market is much smaller.
Since Chrome Apps continue to be available on Chrome OS, I'm considering separating that functionality into a stand-alone runtime or making a custom build for Linux. Anyone want to help with that?
Exactly. I doubt most of the Chrome Apps that exist today were made for Chrome OS. They were made because it was an easy and straightforward way of making a webapp on the desktop. Now, as OP mentions, everyone is moving to Electron and NW.js, neither of which works directly on Chrome OS.
The only upside I can see is that CrOS is soon going to support running Android apps, which may save it, but even then... Maybe they'll figure out a way to run Electron/NW.js apps on Chrome OS?
OP isn't calling out how it hurts CrOS; they're describing how it's the place where Chrome Apps are still available, and thus could have that functionality copied out of CrOS and rejiggered to (continue to) work in Linux.
It may take a bit longer before it is disabled everywhere, but I feel like the writing is pretty much on the wall for NaCl at this point; if you develop or depend on apps that leverage it (in any context), this should probably be a warning sign to start thinking about how to sever that dependency (even if it's not urgent).
At a minimum, Postman does not belong on that list. It has been the case for quite some time now that Postman's desktop app is a) better than the Chrome App, and b) recommended by the developer.
Postman aside, I don't understand why anyone would want any of the apps you listed to be installed as a Chrome App. Why would someone want an SSH or TeamViewer client to be tied to their browser? You're installing an application either way; Chrome internalizing the process as an attempt to offer convenience is a strange idea. Chrome is a browser, not an operating system - let the OS do what it does best. Chrome Apps were an unnecessary and proprietary mess - a failed experiment I am happy to see dismissed.
Ad blockers primarily look at domains, so blocking will continue to be possible at the request level. They aren't interpreting or parsing JS to begin with.
> - stronger DRM
If sites were going to ship Web Assembly-based DRM, they would already be shipping Web Assembly along with the Emterpreter. Remember that wasm has a polyfill already. I haven't seen that happening, so I see no reason to believe it'll happen in the future.
> - Bitcoin mining that a regular user can't detect
A regular user certainly would notice the 100% CPU consumption. And anyway, bitcoin mining in a WebGL shader would be more profitable than anything wasm-based.
Moreover, though, surreptitious bitcoin mining on consumer PCs would be ludicrously unprofitable no matter what. Here's a Stack Overflow answer from last year that calculates how much a site with 2M daily visitors would make if they could all somehow run the fastest C implementation [1]. It was less than 50 cents a day back then, and in the meantime the hash rate has grown by nearly an order of magnitude [2]. Good luck.
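To make the arithmetic concrete, here's a back-of-the-envelope version; every number below is an illustrative assumption in the spirit of that answer, not measured data:

```typescript
// Back-of-the-envelope browser-mining revenue. All inputs are rough
// illustrative assumptions, not measured figures.
const visitors = 2_000_000;      // daily visitors, as in the SO scenario
const hashesPerSec = 1e6;        // ~1 MH/s per visitor: optimistic for a CPU
const secondsMining = 60;        // each visitor mines for one minute
const networkRate = 5e18;        // assumed total network hash rate, H/s
const blocksPerDay = 144;        // one block every ~10 minutes
const rewardBtc = 12.5;          // block reward at the time
const usdPerBtc = 2500;          // assumed exchange rate

const siteShare =
  (visitors * hashesPerSec * secondsMining) / (networkRate * 86_400);
const usdPerDay = siteShare * blocksPerDay * rewardBtc * usdPerBtc;
console.log(usdPerDay.toFixed(4)); // ~0.0013 USD/day: a fraction of a cent
```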
You are right, but replace Bitcoin with the latest Fadcoin and it might get lucrative again.
One cool idea that is also gaining traction is in-browser proof of work to prevent DDoS attacks. Basically, you have to perform a lengthy computation to get past the (fast, ultra-high-bandwidth) firewall. It doesn't slow down the individual user much, but it makes an attack much more difficult. I could imagine people using malicious JS to get these proof-of-work tokens.
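A minimal sketch of the hashcash-style version of that idea (all names and parameters here are made up; a real deployment would also bind challenges to clients and expire them):

```typescript
// Hashcash-style proof of work: find a nonce such that
// sha256(challenge + ":" + nonce) starts with `bits` zero bits.
// Verifying costs one hash; finding a solution costs ~2^bits hashes.
async function solvePow(challenge: string, bits: number): Promise<number> {
  const enc = new TextEncoder();
  for (let nonce = 0; ; nonce++) {
    const data = enc.encode(`${challenge}:${nonce}`);
    const digest = new Uint8Array(await crypto.subtle.digest("SHA-256", data));
    if (leadingZeroBits(digest) >= bits) return nonce; // token for the firewall
  }
}

function leadingZeroBits(bytes: Uint8Array): number {
  let n = 0;
  for (const b of bytes) {
    if (b === 0) { n += 8; continue; }
    n += Math.clz32(b) - 24; // a byte occupies the low 8 of clz32's 32 bits
    break;
  }
  return n;
}
```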
But yeah, computing power in the cloud has become fairly cheap, it's really hard to see how criminals could benefit from secretly serving WebAssembly to people.
I doubt a regular user would notice 100% CPU usage, and I'm even less certain that they would know what to do about it or what was causing it. Most OSes keep Chrome running just fine even when another process is demanding 100% of a core, too.
You must have been working with quite young people or something. My experience goes the other way. I've seen people who bought a new computer because their fans were running at max, and the reason was that they were full of dust and dog hair.
Right, they might notice it, but why would they care? They don't know that they should care, it is just a computer being a computer and it probably spins up the fans for other tasks too. They would be right to not care unless told otherwise, sys ops isn't their job.
I think it would be smart to educate people about this as part of a regular security briefing for non-technical staff, though. But if it's of that much concern to your company, maybe an automated CPU usage monitor could alert the team to anomalies.
[1] I was running a test today when the program being tested got stuck in a very tight loop and the fan really kicked in. I was curious whether it would finish, and let it run for several minutes until all went quiet and the screen went dark. It had shut down to prevent heat damage.
Regular users do notice things like that, they just can't articulate it. "I think my computer's getting old" and "I think I have a virus" are common ways of trying to explain things like this.
I'm not sure where you're getting any of this from.
First, I don't see why you think WASM-based ads will be any less blockable than JS already is.
Stronger DRM on the web [is already a thing][1], but that has nothing to do with WASM.
Bitcoin mining on the web is already possible with JS and so far that hasn't been an issue. If it does start becoming a widespread problem though, browsers will start taking steps to combat it, like they [already have][2] for pages that needlessly consume CPU in the background.
That's actually a great idea for funding content creation online. If a site is open source, then it would even be possible to prove that only a reasonable share of the client's resources are being utilized. I'd take that funding model over the advertising-driven model that exists now.
WASM is aiming to be fast enough to actually be capable of running a rendering engine. No more DOM for adblockers to look at, the pages may finally become canvases.
Right, but you can already do that. Serve the whole page as an SVG, or JPEG, or PDF. You can already embed ads inline. Or just plain serve them from your own server, calling them article.jpg instead of http://ads.adcompany.com/advertisement.jpg.
The reason people don't do this is that ad companies want control. They want to know exactly how often the ads are served, and they don't trust you to serve their ad all the time and to all customers. They also want to rotate ads quickly. Finally, they want to track people. So the solution is to use remote JavaScript. 99% of ads work this way; even major news sites don't sell their own ads anymore.
In this scheme, you'd probably still have ads served from a third party server. And even if they obfuscated the domain name, you could still probably identify their blob and block it.
So I'm not too worried about unblockable ads. But you're right, they will try, and I am worried about sites becoming unusable because they will emulate browsers using canvas, poorly.
If you run your own rendering engine you lose accessibility. If you add aria tags to stuff in order to make it accessible, adblockers will be able to read them as well.
I don't think advertisers care about accessibility. Not the same companies that want auto-starting video clips in your browser.
As for the site owners - unless it's someone large enough to care, I think the usual choice between "letting a few disabled persons access the site" and "getting a few more bucks from advertising" would be quite obvious. If that question even arises.
Don't most ad blockers just rely on CSS selector queries against the DOM? I imagine there are a lot of ways to circumvent those techniques when you are rendering raw pixels with wasm.
Those pixels still have to go somewhere in the DOM, and clicks on those pixels still need to be handled. Plus, there's nothing wasm can render that you can't render with uglified JS already.
Put it this way: once you factored in the price of the bus fare to get to the public library to borrow a terminal to check your giant botnet's bitcoin earnings, your criminal enterprise would be operating at a loss.
CPU mining hasn't been viable since 2011 when bitcoin difficulty was below 1,000,000, and today it's near 600,000,000,000.
Turn your website into an interactive canvas, basically an HTML5 Canvas game that functions like a news site; I think it would be much more difficult to block ads in this scenario. One objection is that you can already do this and we don't see it, but a counterpoint is that it's a hassle at the moment. Once tooling advances (a C# to WASM compiler or something along those lines), we'll see frameworks, and then it will be pretty simple and possibly compelling... If it gives publishers more control over the experience, I don't see how they could pass it up.
Agreed. WebAssembly is not a replacement; the point was not to allow writing extensions in a native language, the point was to have a mechanism for escaping the browser sandbox in a controlled way.
Google could have allowed Chrome to be used as a cross-platform GUI library (THE cross-platform GUI library), but left it to Electron (lagging behind and requiring distribution); think XULRunner. I don't see the sense in that. I'd absolutely love a modern XULRunner.
Electron is really just Chrome bundled with a Node.js runtime that can run native code if the developer chooses to do so.
Personally I'd argue that the "nativeness" of an application depends on how much the developer actually uses that ability to run native code. If you just throw a bunch of standard webapp CSS, JS and HTML in a folder and wrap Chrome around it, it's no more "native" than any other webapp. If on the other hand you have a whole bunch of native code doing, for example, media editing and the HTML and such is just the frontend UI, I'd say sure, that's a native app.
I'm extremely hesitant about installing random non-sandboxed applications. A Chrome app comes sandboxed with well-defined permissions. It's rare for native apps outside of mobile to come sandboxed or be easy to sandbox.
I always wonder what it is that these apps need that cannot be done as regular web apps, as the web platform has provided more and more controlled ways to break out of the browser sandbox. (This is not rhetorical, by the way.)
E.g.: I'd like to build an expense tracker with HTML/CSS/JS/SQLite, but I want it to be offline, and the user can choose to save their DB file in their Dropbox/GDrive folder.
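For what it's worth, something close to that is already doable with SQLite compiled to JS/wasm. A minimal sketch assuming the sql.js API (initSqlJs/Database); getting the exported bytes into a Dropbox/GDrive folder is left to whatever save mechanism the user picks:

```typescript
// Offline expense tracker on sql.js (SQLite compiled to JS/wasm).
import initSqlJs from "sql.js";

async function openTracker(savedDb?: Uint8Array) {
  const SQL = await initSqlJs();
  // Reopen a previously saved database file, or start a fresh one.
  const db = savedDb ? new SQL.Database(savedDb) : new SQL.Database();
  db.run("CREATE TABLE IF NOT EXISTS expenses (date TEXT, amount REAL, note TEXT)");
  db.run("INSERT INTO expenses VALUES (?, ?, ?)", ["2017-06-30", 4.5, "coffee"]);

  console.log(db.exec("SELECT date, amount, note FROM expenses"));

  // export() returns the whole SQLite file as bytes; hand these to the
  // user to drop wherever they like (e.g. a synced folder).
  return db.export();
}
```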
> It was one of the best things that happened to Linux desktops in a long time and removing it hurts users and makes them less secure.
It would be more accurate to say it was the best thing to happen to your use of Linux in a long time, and it looks like even that is only because you're trying to use a bunch of closed source, non-cross-platform stuff.
I also disagree that it makes users less secure. The teams working on Debian, Ubuntu, Arch, etc. have much better security track records than some random web developers who've made an "app". There's no way I would trust a web based SSH client, for example.
And sometimes, you have no choice - there's no FOSS alternative to TeamViewer, and thanks to it running inside Chrome, I no longer have to run a Windows VM.
The web based SSH client is published by Google themselves and they use it internally.
> The teams working on Debian, Ubuntu, Arch, etc. have much better security track records than some random web developers who've made an "app".
The way things are, right now, Chrome is much better at protecting apps from each other than my Linux desktop is. If, for example, the Cleanflight or TeamViewer apps were regular apps, a bug in them would fully compromise my account.
---
Off topic remark about Linux distro security: I really like Arch, but security isn't their strongest suit. For example, they still haven't enabled full-system ASLR, citing unfounded performance concerns, when other distributions did so years ago. Even Windows with all their third party apps has a higher percentage of ASLR binaries than the average Arch system.
They also have no central build system and instead rely on volunteers who build the packages on their personal systems and sign them using their personal GPG keys.
I really want ASLR in Arch so I'll keep complaining about it publicly until it finally happens :-)
> The way things are, right now, Chrome is much better at protecting apps from each other than my Linux desktop is.
I have a hard time believing that. With a ton of stuff all running inside of Chrome, it's much easier for them to access each other's data than if they were standalone apps. Further, since Chrome is such a huge attack surface, I would expect it to be less secure than a smaller, more specific application.
On that note, I can go look at my Linux distro's security and bug tracking systems and see all of the known security issues and bugs affecting almost all of the software on my system. Does anything like that even exist for Chrome Apps?
> If, for example, the Cleanflight or TeamViewer apps were regular apps, a bug in them would fully compromise my account.
Isn't that the case whether it's a Chrome App or not? Chrome has a huge attack surface, so it seems there's an even bigger chance of hitting a bug or being affected by an exploit.
The bigger problem seems to be that you're running apps that you don't trust, while I can trust my Linux distro to have safe software in their repositories. Barring bugs, I generally don't have to worry about installing malicious applications.
I'm not sure Google does any kind of vetting for Chrome Apps, but I'm not sure I'd trust them even if they did. They are the largest ad tracking company in the world after all.
> I have a hard time believing that. With a ton of stuff all running inside of Chrome, it's much easier for them to access each other's data than if they were standalone apps.
Ah, the argument from incredulity.
If you're using X11, every command with access to the display server (which is usually everything you run) can read all keyboard and pointer input and screen output and inject arbitrary input.
And? That doesn't change by running inside of Chrome.
The only reason that's even a concern is because you can't trust Chrome Apps to not be malware.
On the other hand, when I "apt-get install <some app>" I know it's not listening to all X keystrokes unless that's a legitimate part of its functionality, because I trust the Debian team to only add trustworthy software to their repos.
> I have a hard time believing that. With a ton of stuff all running inside of Chrome, it's much easier for them to access each other's data than if they were standalone apps.
Chrome apps are subject to sandboxing, and regular native desktop apps (besides apps installed through OS X's app store) generally don't have any sandboxing enforced on them at all.
Your pique is noted, but as more anecdata I use Chrome Apps for Soundcloud and Mixcloud, where it's nice to have a Chrome window that won't collect tabs and have a recognizable icon. I have dozens of tabs open in each of several Chrome windows and it can be a pain to find the one I want. Insert complaint about not being able to switch to a tab from Chrome Task Manager.
I was specifically disputing the claims that Chrome Apps are "one of the best things that happened to Linux desktops in a long time" and that they're noticeably more secure.
Taking Soundcloud as an example, Clementine (and probably most media players) can stream it just fine. Having a Chrome App is nice, but it isn't providing anything that isn't already available. I'd even say the Chrome App is a step backwards, because with Clementine I don't need a separate app for every music service.
The idea being that I should install Clementine to listen to Soundcloud, instead of Chrome, which is already installed? This is the only thing I would use Clementine for, since it doesn't support multiple genres per track so it's out as a music library.
> instead of one Chrome instance, I'm now running five which use more than one GB of RAM each
Is that true? Executables and shared object files are supposed to share (code) memory.
So what big data structures does Chrome use that it can share between tabs (which are processes) and that it can't share between different instances of Chrome?
> It was one of the best things that happened to Linux desktops in a long time and removing it hurts users and makes them less secure.
I disagree: you can install most of these from the official repository of your distribution, without the use of Electron. They are also very secure if you run them as an unprivileged user.
Suppose your JS Chrome App is getting the plug yanked on it, what are your alternatives?
1.) Port it to Electron and keep nearly the same code base
2.) Rewrite the whole thing as a native app in a language such as C++, without the use of Electron
You can't possibly tell me that most developers won't choose #1 instead of #2 in a heartbeat (the switching costs are orders of magnitude more for #2, for one thing). Which is not a Good Thing.
And it's also very obvious that #2 isn't nearly as secure as #1, which runs in a sandbox and so does not have direct unchecked access to users' files like #2 does.
Yes, but the existence of ssh on Chrome makes it much, much easier to teach a Windows user how to try out the Linux command line. PuTTY is annoying as hell to help a new person get working, and they might not have enough space for Vagrant+VirtualBox.
PuTTY's UI is an ass-backward mess (and that's putting it mildly), and the fact that you're used to putting up with it, or that there exist five Knuth's arrows worth of guides, doesn't change that. In fact the latter is probably a testimony to it all. I've used it for years, know it by heart, still hate it, and am pretty well served by and versed in the unix terminal universe, TY. PuTTY has been useful for sure, but that was by scarcity, as there was basically no alternative on Windows for a decade or more.
"Ubuntu on Windows" as they're calling it now is free, enabled via the control panel, and provides a near perfect bash experience. I use it on the desktop I built for VR to SSH into my digital ocean droplets all the time. Super easy to use, just open powershell, run `bash`, then you can run `ssh` like normal. You have access to your windows files with /mnt/c and etc. for additional drives. The only issue is that Powershell doesn't support the full gamut of colors that Bash does, but that support is coming in the fall creators update and frankly it works fine with every Vim and zsh color scheme I've tested.
10/10 developing for the web on windows is finally tolerable
I don't know enough about ssh for Chrome and bash for Windows to get your point. Isn't bash for Windows (as part of WSL?) free? What is it that costs more than $5 using bash for Windows but is free with ssh for Chrome? Genuinely interested; I'm on OS X mostly, but I'm WSL-curious.
I interpreted that as them not having access to a Windows machine and wanting to try out the experience for themselves before attempting to teach someone else.
I love PuTTY, but my best PuTTY is actually KiTTY, since it saves profiles in a local config file instead of the ominous Windows Registry. Much easier to move around :-)
If you run them as a different user from yourself, maybe, but who does that?
The idea that software is secure if it only runs on your own user account is stupid IMO. I'd rather that software had access to everything on my computer EXCEPT my personal files.
It's about restricting access:
The first is protecting others;
the second is protection within your own realm.
Both are needed. (Unix's attitude was basically: at least don't touch the data/system that other users have.)
It's useful outside of Chrome OS if you have a security perimeter based on TLS with ACLs and auditing already in place and you want to use it for SSH as well:
I've got this crazy idea. Since "Linux Desktops" are generally running GNU under the hood for providing user land services, why don't we call those systems... I don't know... "GNU/Linux"? That way we can distinguish them from systems that use the Linux kernel, but have a completely different user land infrastructure.
How many of them actually intimately use the GNU userland as opposed to Xorg and whatever libc's installed? GNU's an increasingly irrelevant portion of unix and unixlike systems -- most of the actually important userland portions are python, ruby, the aforementioned Xorg, etc.
I actually don't think you are incorrect. GNU is not nearly as big a piece of the puzzle as it used to be. It's just that when most people say "Linux Desktop", the part where they say "Linux" usually means the part that GNU makes up. As far as I know, GNU libc is still by far and away the most popular libc installed on those kinds of systems.
So it was just kind of a snarky joke because the parent said that to be a "Linux Desktop" you had to be able to get ssh running (presumably they meant openssh). And while that's not GNU, GNU is what the vast majority of "Linux Desktops" will use to get you there -- so the implication really was that "Linux Desktop" == "GNU/Linux Desktop".
I thought it was funny, but probably I was being too obscure. Also, I should know better than to dive into politics for no good reason.
Yes, I know what it means and includes. Android, which is one of the biggest unixes right now, doesn't use GNU. iOS, which is another one of the biggest unixes right now, doesn't use GNU. Most embedded linuxes don't use GNU. So yes, for the parts of unix which are visible to most people, the gnu parts are not very relevant at all.
While this is true, there is still a unix-like userland typically, at least in the form of busybox or somesuch..
I think there is some value in denoting 'linux the kernel' from 'linux the unix-like system', especially in the face of those systems which mainly use 'linux the kernel' in a non unix-like way, such as here..
e.g.: the 'gnu parts' (e.g. unix-style userland) are hugely important for me in a workstation - I could not do work in a system that doesn't provide the 'gnu(unix) like' user interface. On a phone/consumer/browser device, this is not so much the case
A device using the linux kernel (or Mach kernel in iOS) doesn't make it a "Unix" or "Unix-like" system, despite that same kernel being used in other truly Unix-like systems. The user land (aka GNU in most Linux distros) is what makes it a Unix-like system. That doesn't mean GNU isn't relevant, it means what you considered a "Unix-like" system was overly broad.
This reply is a bit overly pedantic and I apologize, but you kept pushing so I wanted to clarify.
You can choose what kind of laptop you want. ChromeOS is one of the options, and security (+ trivial exchange-ability) is one of the selling points for using a Chromebook.
I tried it for a while, but I'm too used to the Mac to have made the switch easily, so I moved back. But I know quite a few folks who use and love them. Opinions, as I'm sure you can guess, vary widely. It was surprisingly not-bad, even for a diehard mac user, and that was on a model from two years ago.
Google is a very large engineering organization, and (my opinion here, but one shared by others) recognizes that there's a lot of diversity in what engineers like for their workflow. There's obviously a set of standards for what you can choose from as far as laptops (since the company is buying them), but it's pretty broad.
Niels Provos himself is a Chromebook user (not sure if he needs to access production these days...) and he talks about locking down privileged access to Chromebooks with security keys:
Looks like Mozilla won this fight.
When Mozilla didn't accept PNaCl and the Pepper API proposed by Google, Mozilla went down the asm.js path, which has now led us to WebAssembly being the general way forward.
I suppose I should have asked "what does polyfill mean in this context?" Does that term really alias "run [it] on" or is there something more subtle about this term's meaning?
Web developers, bless their hearts, feel uncomfortable with the word "emulate", so they use "polyfill", or sometimes the more general "shim". To some degree, "polyfill" implies that the functionality being emulated/shimmed is functionality that, under some other circumstances, is expected to be provided by the browser itself.
There's a high performance asm.js implementation of WASM (converting WASM to asm.js at runtime), i.e. if a browser didn't implement WASM but had a JIT which executed asm.js at high speed, it could execute WASM at high speed.
There isn't such a polyfill. There is currently no good tool to translate wasm into asm.js (or general JS) on the client. The best that exists is to run the wasm in a wasm interpreter, very slowly.
In theory a translator could be written into asm.js, but there would still be wasm code that won't run fast, such as 64-bit ints, unaligned loads and stores, bitcasts, and other operations.
Yes, but emscripten can compile to both (with or without using the new native LLVM WebAssembly backend). It's stretching the word polyfill, but emscripten users have pretty close to a turnkey solution for producing code that detects WebAssembly support and falls back to asm.js.
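The detection half of that turnkey setup is small. A sketch of the idea (the file and factory names are placeholders, not emscripten's actual output):

```typescript
// Load the wasm build when the browser supports it, otherwise fall back
// to an asm.js build compiled from the same source.
async function loadModule(imports: WebAssembly.Imports) {
  if (typeof WebAssembly === "object" &&
      typeof WebAssembly.instantiate === "function") {
    const bytes = await (await fetch("module.wasm")).arrayBuffer();
    const { instance } = await WebAssembly.instantiate(bytes, imports);
    return instance.exports;
  }
  // No wasm: pull in the asm.js version, which defines a global factory.
  await import("./module.asm.js");
  return (window as any).createAsmModule(); // hypothetical factory name
}
```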
Project Mortar is different in that it is PDFium and Adobe Flash being allowed via the Pepper API — basically it's for sandboxing those current native plugins. This would not be a public web use case.
PNaCl vs WebAssembly was all about letting everybody run sandboxed native code in the browser without any extensions or prompts.
Native Client (the portable version is known as PNaCl) was an open source project by Google to achieve native performance with sandbox security.
Mozilla wanted a more open web which led to Web Assembly.
Yeah, AFAIK PDFium doesn't use PNaCl, and Flash will only be around as long as absolutely necessary. The only reason they aren't using PDF.js is that the PDF spec is ~12K pages long and they don't want to throw engineering resources at it anymore.
> they don't want to throw engineering resources at it anymore.
"As I mentioned elsewhere, there haven't been any full time mozilla devs on PDF.js for quite awhile. I'm not sure I actually see the whole maintenance cost savings argument since PDF.js has practically cost Mozllia nothing the last few years. A lot of bug fixes have come from unpaid contributors in that time.
Initially, PDFium was pitched as a freebie if we added support for chromium's flash and then we'd also get improved PDF printing and form support. However, the amount of effort that has gone into supporting PDFium is already far beyond what it would have taken to improve PDF.js form support and help improve Firefox's printing (which would have benefited the web in general). Though, this is my very biased opinion as I was tech lead of PDF.js."[1]
And this has been borne out--Project Mortar was announced about 9 months ago, and if you look at the relevant bugs in bugzilla, they are still pretty far away from getting it into production. They could have used a fraction of those resources to get pdf.js at parity with pdfium.
I'm sad to see PDF.js go. The PDF spec is ~12K pages long, and most of those pages describe things I don't want my PDF viewer to support, like embedded Flash, in much the same way I don't want my browser to support Flash.
PDF.js is at a very comfortable nexus of compatibility and efficiency, and I kind of wish I could use it for non-web-PDFs as well (but not so much that I want to put it in an Electron wrapper).
PDF.js isn't "going" per se, but its lack of inclusion and development may certainly take wind out of its sails. It should still be possible as a PDF viewer in the browser (as an addon). And I think you can already use it (inside Firefox) to view non-web-PDFs.
I totally agree that we shouldn't be throwing random PDFs at that pile of C++ code, but I'm not sure there is much of an alternative.
I'm not personally familiar with those internals, but from my past life in the print industry, I know it took decades for print controllers to reliably handle native PDFs. And it's still not a sure thing[0].
And it's not a stationary spec, it's in Adobe's best interest to keep throwing in new features so they can license new versions of their software. It's literally a rehash of the old school office suite document formats.
Google is willing to cover those development costs because they need it for Android, ChromeOS, and Google Docs. So let them pay for it. This is doubly true if (as I suspect) this becomes the de-facto FOSS PDF implementation.
How exactly did Mozilla win this fight? WebAssembly was inspired by Mozilla's asm.js and Google's PNaCl. Also, the team working on WebAssembly are from Mozilla, Microsoft, Google and Apple. The real winner here are users because we now have a standard among all major browsers.
Google developed NaCl and PNaCl and put the latter out on the Web with no spec --- just "pull this version of LLVM and ship it". Also, apps written in PNaCl used the Pepper API for all platform features, a giant pile of Chromium code also with no spec. All an absolute nightmare for anyone who cares about Web standards and browser bloat. These efforts required big Google teams working for many years ... efforts which are now going by the wayside.
Mozilla put together a small team and did asm.js to show that you could port C/C++ apps to the Web and get good performance while reusing the JS engine and all the existing Web platform APIs.
Now we have WebAssembly, which uses existing Web platform APIs and which browser vendors are implementing by reusing the guts of their JS engines. It's obvious who won.
In a way it doesn't matter "who won" because as you say Web developers are the ultimate winners. But it does underscore how much the Web continues to owe to Mozilla.
(It's also an illustration of how powerful companies can commit massive blunders and get away scot-free in the marketplace and in PR.)
There's no blunder: Google had a goal, threw out something, stimulated competition over the precise implementation, and now there is a universally (among browser vendors) accepted solution moving the web in the direction Google wanted.
Google wasted huge resources on an approach which it was obvious from the beginning would never lead to a Web standard. Mozilla people, including me, told Google people even before PNaCl appeared that introducing the whole new non-standard Pepper API was unacceptable.
> Google wasted huge resources on an approach which it was obvious from the beginning would never lead to a Web standard
Google has been doing this for a long time; I doubt it's unintentional. If there is functionality they want which is not standardized, they go ahead and implement it. When a workable standard is ready or detailed enough, they switch over to it. The earliest instance of this that I can recall was Google Gears, which was superseded by LocalStorage. There probably are earlier examples.
NaCl predates asm.js/Wasm by 2 years: it wasn't always an option
Here's a timeline:
2011-10-16: Native Client released
2013-03-21: asm.js released
2013-06-25: Firefox 22 released with asm.js support
2013-07-17: Chrome 28 released with optimizations for asm.js
2015-06-17: WebAssembly released
2017-06-30: Google announces switch to Wasm.
When do you think was a good time for Google to switch tracks?
They do that as well, it's not one or the other. Google had employees working on WebAssembly (along with Mozilla, Apple and Microsoft). Standards take time to be developed and finalized.
> Google had a lot of people working on NaCl and PNaCl for a long time before they started working on WebAssembly
That's probably because NaCl (2011) and PNaCl predate WebAssembly (announced 2015). Google was optimizing Chrome for asm.js as far back as Chrome 28 (July 2013) - less than 5 months after asm.js was announced.
Google makes money from the web. Massive amounts of money. Something like 93% of their revenue. If WebAssembly leads to the decline of mobile apps, the resources Google spent on PNaCl are tiny compared to the amount they will make by continuing to own the world of online advertising.
> Just so I get your argument straight, you're saying that launching a proprietary competitor to the standard means you're helping the standard?
No, I'm saying that launching a nonstandard solution to a problem without a standard solution or where there are discontents with the standard that are not being addressed is the usual way new standards are motivated, whether the new standard is based on the nonstandard solution or developed in reaction to it.
Standards (and even more so, standards that are actually implemented rather than being mere paper triumphs) are somewhat backward-looking rather than out-of-the-blue.
A commitment to standards isn't about not implementing nonstandard things, it's about engaging in the standards process to help get to a robust standard and then implementing it (replacing nonstandard solutions, if any) when it is clear what the consensus standard will be.
(And I use "nonstandard" rather than "proprietary" because standard/nonstandard is a different axis than open/proprietary.)
> Because then we owe Microsoft and Apple a shitload of thanks.
Well, yes, a lot of current web standards were originally either nonstandard solutions from Microsoft or Apple or alternatives developed in response to and motivated by such nonstandard solutions, so, sure, they've driven a lot of the progress. I think they've generally been less good about (at least, slower at) participating in standardization of an alternative when their original solution isn't acceptable to other players, but they have definitely been change drivers.
Actually a commitment to Web standards does mean a commitment to not launching nonstandard extensions to the Web platform, usually. Many of Chrome's own Web standards people would tell you this. For PNaCl and some other features Google's official excuse was to designate them "not part of the Web platform" ... which doesn't really make sense from anyone's point of view other than Google's.
In this case, the desirability and feasibility of running C/C++ code on the Web was not something that needed to be demonstrated by enabling PNaCl for Web content. In fact, uptake of Web-PNaCl has been extremely low --- fortunately. If significant Web-PNaCl uptake had been a prerequisite for WebAssembly, then WebAssembly probably wouldn't have happened!
The problem was precisely Pepper, a pile of APIs that Google thought they could force on everybody, not PNaCl per se. If the initial NaCl implementation had just exposed the pre-existing Web APIs to the native code sandbox without adding anything extra, the story could have been rather different, and a version of PNaCl might have been standardized.
That's true, although NaCl's design made that difficult because the native code sandbox had to be a different process so interacting with the page's DOM would have required IPC.
I am curious what made it necessary to run NaCl in a different process. I thought the main idea behind NaCl was to allow same-process native sandboxes.
According to https://static.googleusercontent.com/media/research.google.c... NaCl does not sandbox loads, relying on address-space separation to ensure secret data is not leaked. Obviously this only works with a single sandboxed application per address space. (And even then you'd have to be pretty careful!)
This is only for NaCl on ARM or AMD64. The original NaCl for x86 uses the segment registers for isolation, restricting both loads and stores to the permitted addresses. That, as far as I understand, does allow embedding into a 32-bit process without compromising secrets.
So, speculating about an alternative world where Google had not developed Pepper but instead bridged the Web APIs into x86 NaCl, the later designs for x64 and ARM would have restricted loads to the allowed address space as well.
Sure, (P)NaCl could have been implemented differently in a way that allowed multiple sandboxed applications per process, and then DOM access would have been easier and maybe Pepper wouldn't have been necessary, though there would have been slightly higher overhead I guess.
Chrome doesn't run V8 in a different process from the page DOM. That's the difference: NaCl/PNaCl _does_ run in a different process from the page DOM, so interacting with the DOM gets complicated.
There's no denying Mozilla played a huge role, but PNaCl was a critical step away from "JS as the bytecode of the web". That approach simply couldn't compete with mobile platforms running on (more or less) native code.
At this point, I really loathe adopting any facet of web-browser technology: there are too many broken APIs in too many browsers to maintain on both sides of the system. The browser developers have an insane number of combinations of features that need to be made useful, secure, and reliable, while developers targeting browsers are always at some weird disadvantage, able to spend months or years maintaining an application for the browser only to find it rots out from underneath them.
Should you even be slightly successful in the use of an API, you always have to worry about deprecation when someone is no longer interested in doing the maintenance any more.
I am sure there were more than a few game developers that are livid today about this announcement.
These things will go in cycles, and I expect there will be a native-application cycle coming soon from browser-API fatigue.
If you used this at all, it was in a Chrome-only app; this is pretty far from mainstream.
A good rule of thumb seems to be to use what everyone else uses, and make sure you're always ready to ship another version. People expecting compiled apps to survive unchanged for years are out of luck.
"Used to be"? There are still places that do that. :-) It's especially prevalent in factory automation and process control. We're lucky in that many of those systems are actually so old that no one ever thought of connecting them to the Internet.
That's a controlled environment, though; Samsung almost certainly creates a customized build of Chromium for its TVs and can simply keep NaCl enabled if it wishes. They aren't beholden to the same policies that apply to the consumer Chrome releases.
The problem is that you eventually end up stuck with an ancient version of Chromium without security fixes and incompatible with websites that use newer browser features.
> The problem is that you eventually end up stuck with an ancient version of Chromium without security fixes and incompatible with websites that use newer browser features.
Samsung can't even keep their flagship phones up to date, their TVs are not going to be kept updated regardless of what Google does with Chrome...
Deprecated APIs are the cost we pay for innovation. If we're going to get new APIs, then it's reasonable to expect that old, little-used, and superseded APIs will be retired, since the browser vendors don't have unlimited resources.
Of course, it's understandable for those who are hit by a deprecation to be annoyed, but overall I think the tradeoff is worth it. It's definitely better than having unchanging APIs and browsers becoming less relevant over time.
There needs to be a middle ground. While it's true bad APIs need to go once in a while, a lot of them never should have been introduced in the first place, and if some work had been put into getting them right initially, they would still work just fine today. Not perfect, mind you, but just a little effort would be enough that we wouldn't have to deprecate them.
I think the Web is moving in this direction, though I don't know how well.
Flash is a zombie but not dead. You can still get an old SWF working by clicking a couple of things in browser settings. The thing with Google is they deprecate wayyyy too fast without proper thought to the alternative. It's like their projects commit suicide rather than die naturally.
Yeah. Wouldn't it be great if they just created a simple VM with well specified bytecode so that we could push the complexity into our tooling and just compile into the standard target?
Much harder than it sounds. Java, Adobe, and Google pumped a few hundred million dollars into previous attempts and they all failed. WASM learns from two decades of previous attempts and we are finally getting something sustainable[0].
Electron apps are not really relevant when talking about Java applets. Java applets embed in the browser, so compare Java applets with HTML5, CSS3, JavaScript and WebAssembly.
I much prefer the latter (HTML5 + CSS3 + JS + WebAsm).
IMO: No. It was more of an implementation issue. When Java applets were first implemented, the web platform was still extremely limited -- HTML was still at version 2.0, CSS and the DOM didn't exist yet, and Javascript was still very new. So it made some sense, at that point, to have Java applets exist in their own "world", and not interact with other web content.
A modern Java applet environment would probably look very different. More of a focus on providing a bridge to the browser's DOM, less of a focus on building UIs using native Java frameworks (like AWT or Swing).
It's definitely a challenge trying to balance portability with support for platform-specific features, but I'd be a little hesitant to draw general conclusions from the two big examples, since both Adobe and Sun were notoriously negligent platform maintainers, addicted to haring off after some shiny whim before fixing what they had already built. Even simple things, like Sun caring about non-server performance in the 90s, or either of them figuring out how to properly package and ship updates before the 2010s, could have improved their reputations immensely.
It's reinventing the good part of the JVM (the JIT and the bytecode), better than the JVM (unsigned ints and value types are supported), without the bad parts of the JVM on the Web (the libraries, including the slow graphics stack, etc.)
What do unsigned ints bring to the table? I'd use them in C to avoid undefined behaviour, but that's not a problem on the JVM. Value types sure, I guess, but they don't seem super important.
Can't we just make DOM bindings for the JVM and reuse the huge existing library base?
Without unsigned types you have to use the next larger type and mask. It's ugly, a pain, and slower. You run into this problem a lot when dealing with binary data.
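Concretely, the widen-and-mask dance looks like this; JavaScript's bitwise ops are signed 32-bit, so asm.js-style code does the same thing the JVM forces on you:

```typescript
// Treating a signed byte as unsigned: widen to a larger type and mask.
function u8(signedByte: number): number {
  return signedByte & 0xff;          // -1 becomes 255
}

// Same problem one size up: `| 0` reads a bit pattern as signed int32,
// `>>> 0` is the mask-style escape hatch for unsigned int32.
const asSigned = 0xffffffff | 0;     // -1
const asUnsigned = 0xffffffff >>> 0; // 4294967295
```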
You're begging the question, assuming you have to represent a particular unsigned type to start with. What's the problem that you're trying to solve?
I appreciate that e.g. particular image formats are defined in terms of unsigned integers of particular sizes, but parsing binary formats seems like a specialized use case that's not worth distorting the whole language over (and it already involves fiddling with endianness, so you can't directly use the "standard" version of a given-sized integer when parsing).
> parsing binary formats seems like a specialized use case that's not worth distorting the whole language over (and it already involves fiddling with endianness, so you can't directly use the "standard" version of a given-sized integer when parsing).
You call it "distorting the language", I call it "exposing the capabilities of every single CPU manufactured in the last 15 years".
> You're begging the question, assuming you have to represent a particular unsigned type to start with. What's the problem that you're trying to solve?
1. Compiling existing C++ codebases.
2. Multiplication and division are not identical for unsigned and signed values. Say I have a memory address on 32-bit (note that this applies equally well to 64-bit) and I'm doing pointer arithmetic. I'd better not be doing a signed multiply, or else values above 2GB will be corrupted!
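The division case is easy to demonstrate with JS numbers, which is exactly how asm.js spells the two operations differently:

```typescript
// The same 32 bits, divided by 2, give different answers depending on
// whether they're read as signed or unsigned.
const bits = 0x80000000;                     // an address at the 2 GB boundary

const signedDiv = ((bits | 0) / 2) | 0;      // bits|0 = -2147483648 -> -1073741824
const unsignedDiv = ((bits >>> 0) / 2) >>> 0; // 2147483648 / 2 -> 1073741824
```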
> You call it "distorting the language", I call it "exposing the capabilities of every single CPU manufactured in the last 15 years".
There are plenty of CPU capabilities that languages don't exploit fully (e.g. most languages won't give you direct access to the carry flag) or only allow you to access via a specialized interface rather than making them a first-class part of the language (e.g. SIMD instructions). This seems entirely normal.
> Say I have a memory address on 32-bit (note that this applies equally well to 64-bit) and I'm doing pointer arithmetic. I'd better not be doing a signed multiply, or else values above 2GB will be corrupted!
I don't really see how this works (pointer arithmetic, not that one should be doing it at all, would generally involve adding or subtracting small offsets to pointers, I can't see any use case where you would want to multiply a pointer?), but in any case it doesn't apply equally well to 64-bit in any practical sense? If you happen to need an amount of memory between 2GB and 4GB or between 8EB and 16EB then maybe using unsigned pointers lets you use a smaller pointer, but that seems like a pretty narrow use case.
> There are plenty of CPU capabilities that languages don't exploit fully (e.g. most languages won't give you direct access to the carry flag)
Sure they do: the carry flag is the > operator.
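That is, you recover the carry with a comparison after a wrapping add. A uint32 sketch:

```typescript
// Add-with-carry emulated on 32-bit values: after wrapping, the sum is
// smaller than an operand exactly when the addition carried out.
function addWithCarry(a: number, b: number): [sum: number, carry: number] {
  const sum = (a + b) >>> 0;               // wrap to uint32
  const carry = sum < (a >>> 0) ? 1 : 0;   // the comparison in question
  return [sum, carry];
}

addWithCarry(0xffffffff, 1); // [0, 1]
```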
> or only allow you to access via a specialized interface rather than making them a first-class part of the language (e.g. SIMD instructions). This seems entirely normal.
So if you want to argue that unsigned multiplication/division should be encoded as a magic intrinsic (as Java 8 does) instead of as a primitive operation, then I still think it's a bit silly to make such common operations require such verbose encodings, but I could be convinced either way.
> pointer arithmetic, not that one should be doing it at all
The point of wasm is to be a compilation target for languages like C and C++. So pointer arithmetic is very important here.
I don't think this is a "begging the question" fallacy. General purpose programming languages often need to deal with binary data. This can be anything from documents, compressed files, images, audio, video, network data, a video framebuffer, and so on. Dealing with binary data is in no way a "specialized use case" of programming; I'm honestly a little incredulous at your insinuation.
Further, this "distortion" amounts to adding 4 types and defining bitwise behavior, and since unsigned shifts are easier than signed shifts, no problem. I wouldn't call that a distortion, especially not compared to things like for-in and lambdas.
But really, can you explain your disdain for unsigned numbers? I've never heard someone argue for their exclusion from Java -- I thought we all accepted it was a mistake.
I view extra primitives as very expensive because they add a whole bunch of extra cases in the language itself; I like small languages where as much as possible can be moved into libraries. I just don't see the use cases for unsigned ints as being widespread enough to justify having them in the language; I've worked across a number of industries and I think I've seen them used once (when fixing a bug in ffmpeg), whereas for-in and lambda are used absolutely everywhere. I'm not especially anti-unsigned; if I was designing a JVM-like bytecode I'd remove short, and perhaps even (single-precision) float and int as well.
What's not to like? Why couldn't we just create a system compatible with Java bytecode, and just replace the APIs through which it interacts with the system (as Android has done)?
That would even make it possible to reuse old applets with just a tiny shim around them.
And it'd keep compatibility with a huge environment of libraries and languages.
Were you around in the 90s when people actually did try to use the JVM for this?
Do you remember the part where everyone started using JavaScript instead because of all the deficiencies of Java Applets? (Slow to load, resource intensive, no interaction with the surrounding page)
Probably the biggest cause of security bugs in Java applets is the fact that the applet execution environment is a full JRE, with all the APIs involved in desktop JRE instances available, with "insecure" ones being locked down on an as-needed basis. So if a developer forgot to insert a security check in the JVM, the applet becomes able to subvert its sandbox. The browser VMs don't have these APIs available in the first place, so getting undesirable access to things like the filesystem is much more difficult than "spot the API with the missing check."
Have you been doing this for any amount of time? Because today, browser compatibility is just excellent by historical standards. The only differences are usually that some new feature doesn't appear in all browser engines at the same time.
Yeah, we were looking at browser stats for our website today and during the last 30 days 20% of our distinct users were using IE 6, 7, 8, 9, 10, or 11. Of course most were on IE 11, but when you still have a significant number of people still using ancient versions of IE, I despair at ever being able to use modern web technologies.
The only way to move forward is to ignore them. Put up a message that advises to install Chrome or Firefox and get rid of all the cruft that supports browsers older than 5 years. Most probably already have a modern browser installed but use IE out of habit. I see it at large corporations all the time. I even heard "I wish it would remind me to just use Chrome"
Unfortunately that isn't an option for us. We are mandated to support everyone. And those people on IE are retail sales associates using our site on hardware that simply cannot be upgraded. It must be replaced to get a newer browser. But the replacement cycle is measured in decades.
I agree, but simply telling users to download a better browser could come across as pretty condescending. Falling back to CSS-less HTML 4 would be more responsible for everyone.
I don't quite get your numbers, because 20% * IE6, 7, 8, 9, 10, 11 = 20% * 6 > 100%.
Or, alternatively, the 20% for all of them combined don't prove your point that "people are using ancient versions", because, as you point out yourself, "most were on IE 11".
Out of all browsers, including on mobile devices -- which skews the results pretty significantly.
If you only consider desktop browsers, MSIE comes out between 9-10%, including about 7% on MSIE11. That's not counting Edge, which is another 3% or so.
The most material result of this is that Chromebooks won't have a working SSH client starting sometime next year, because WebAssembly can't do real sockets without an external proxy.
Yes, because non-portable NaCl has higher usage inside Chrome Apps and Extensions, but I don't for a minute believe that Google is going to continue investing eng resources in maintaining (non-portable) NaCl now that pNaCl is dead, so I'd expect that to get deprecated sometime in 2019 or 2020.
> Yes, because non-portable NaCl has higher usage inside Chrome Apps and Extensions, but I don't for a minute believe that Google is going to continue investing eng resources in maintaining (non-portable) NaCl now that pNaCl is dead, so I'd expect that to get deprecated sometime in 2019 or 2020.
Sure, it is deprecated and will be deactivated eventually (but maybe not soon; consider WebSQL). Google's got every incentive to make sure that WebAssembly (plus the APIs exposed in ChromeOS) provides a complete replacement for PNaCl and apps have had time to transition before doing that.
That's largely orthogonal to this switch. WebAssembly can't actually do anything except for compute and access whatever its embedder exposes. As others have mentioned JavaScript ChromeOS apps have this API available, so WebAssembly running in that context will be able to access it.
Yes, the native Android apps are a good solution - they perform far better.
One caveat, however: the Meta key is easier to map in the Chrome app - relevant for those using Emacs, for instance. 'External keyboard pro' may help on Android, but the setup is more complex.
It seems like the browser vendors are doing a good job coordinating to provide a robust ecosystem around Web Assembly. Clearly much healthier for the future of the web than fragmented browser-specific solutions.
I'm so confused why Google deprecated Chrome Apps. You'd think they'd take advantage of the abundance of Electron apps by extending the capabilities of Chrome Apps to grow their Chrome ecosystem and attract new developers.
As a web developer working primarily in JS, what should I be learning now to stay relevant/up-to-date once WebAssembly is more common? Are we going to see more web stuff built with c++, like the dsp example in this blog post?
I don't think WebAssembly will become more common than JavaScript anytime soon. And even when it does, that will mean the tooling has become so good that you won't even have to think about WebAssembly. That will be left to the people who create compilers targeting WebAssembly.
Hopefully you'll just use whatever language your org/team/etc. uses, and have it compiled to JS or WebAssembly as is most appropriate for said language.
To answer your other question, I most certainly hope we won't see "more Web stuff" built with C++. C++ is a terrible language for that. It's a very good language for stuff that needs predictable high performance, such as games. And games are a good market for C++ in the browser through WebAssembly. But other than that, I hope other, more high-level languages will pick up.
I guess the first ones jumping on the WebAssembly wagon will be Web developers, so we will probably get WASM modules written in whatever languages they see as appropriate when JS doesn't cut it anymore.
I guess it will be Rust or Go.
Rust because of Cargo (for npm users a big +) and Mozilla (good marketing of Rust).
Go because of Google (also a Web company with good marketing) and because I read some Node.js developers already switched to Go before WASM.
A lot of current experiments seem to be the other way around. People with C/C++ projects porting them to WebAssembly. This is partly due to the initial limitations in the current version though.
I'm not a web developer, but this has definitely piqued my interest to the point that I've been dabbling with WebAssembly. It's really easy to get going, so I could see a lot of people with backgrounds other than WebDev getting into application development in the browser.
Incredibly interested in the answer to that question as well. Go will not have a future in web development if they don't find a solution for the GC issue in wasm. I assume (and hope) that's currently a big topic internally for the compiler team as well.
I think if the web assembly experiment succeeds it will spawn new languages that become wildly popular. The web is huge, and client side programming is just a different animal.
You shouldn't have to deal with C++ directly unless you really want to; it's more likely that there will be 'precompiled' WebAssembly modules of existing C/C++ libs which solve computation-heavy tasks (like physics engines, image manipulation, 3D rendering frameworks, etc.)... and which would offer a JavaScript API. The workflow for JS devs would be the same as using a minified JavaScript framework now, but instead of a minified JS blob you'll load a WebAssembly blob.
Without finalizers in JavaScript, WASM libraries don't work out well: you need to do manual memory management in your JavaScript code, since you can't hook the native destructors into JavaScript's garbage collector. Until those are a thing, it'll be extremely ugly to work with WASM libraries in JavaScript.
I think emscripten does this by keeping dictionaries on the JavaScript side which map numeric "ids" (used on the asm.js/wasm side) to JS objects. This is how mapping a GLuint 'texture id' to WebGL objects works, for instance. The JS object won't be garbage collected as long as it is 'pinned' in the lookup dictionary, but as soon as the asm.js/wasm side manually releases the handle, the JS object can be GC'd. Future APIs (like a WebGL successor) will hopefully provide 'garbage free' APIs so that they don't require keeping such helper structures around.
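Roughly this pattern (a generic sketch, not emscripten's actual code):

```typescript
// Handle table: wasm/asm.js code only sees integers, so the JS side
// pins objects in a map keyed by id and frees them when the native
// side explicitly releases the handle.
class HandleTable<T> {
  private next = 1;
  private objects = new Map<number, T>();

  alloc(obj: T): number {            // called when native code "creates"
    const id = this.next++;          // e.g. a texture; the id goes to wasm
    this.objects.set(id, obj);
    return id;
  }
  get(id: number): T | undefined { return this.objects.get(id); }
  free(id: number): void {           // explicit release from the wasm side;
    this.objects.delete(id);         // only now can the JS object be GC'd
  }
}

const textures = new HandleTable<WebGLTexture>();
```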
It seems too soon to worry about it, unless you like to work on compilers in which case have fun! If you don't, you have to guess what compile-to-WebAssembly language will become popular. So maybe just learn new languages if you enjoy that?
If a popular web library uses it for something real, they will probably provide a JavaScript API to call into it.
To your second question: WebAssembly shouldn't affect your day-to-day standard frontend development unless you're building performance-critical applications like games, simulations, etc.
For resources, MDN has a great introduction. Pretty up-to-date too. [1].
Right now, that's probably the best way to go if you're keen to build non-trivial wasm stuff, but I don't think it's a foregone conclusion that it'll stay that way. There's a Rust implementation now, for instance. And some talk of better support for garbage collected languages.
You can program in wasm's text format (the actual assembly language) directly, if you want, and you can program in anything that compiles to wasm. Among languages higher-level than wasm itself, C/C++ have the most developed toolchains for now, but that should open up.
If you care about security, performance, or having portable offline applications, it's a sad day. Google just pressed reset and erased your work.
The way Google has abruptly sent PNaCl to the knackery, rather than managing a gradual transition, is just ugly. By removing the PNaCl functionality completely from builds, they have broken faith with the development community. They didn't even commit to allowing PNaCl to run in deprecated mode via a feature flag or something.
In hindsight, I can see that Google had shifted their effort away from PNaCl several years ago. But to kill it the way they have is just brutal.
I would agree that for general web development this seems like a way out of JavaScript purgatory. But there is a long way to go before it can match what PNaCl did. And I find it bizarre that they created a 32-bit implementation; I know they had reasons, but they seem shortsighted.
AFAIK the main missing feature in WebAssembly that PNaCl had is shared-memory multithreading and that is implemented and waiting on spec finalization to ship. What else is there?
- Security: code signing, timing determinism, large API attack surface, complex JavaScript interpreter/JIT.
- Performance: an obvious deficit between an IR bytecode and a JIT; static analysis vs. runtime checks.
- Concurrency: how many cores on your new 2017 box? By 2027?
- Features: the (P)NaCl API let you do more, like access devices for video/graphics/audio; instead that will be an implementation-specific thing. Networking: open sockets to unrelated hosts; same-origin is a straitjacket.
- Elegance: wasm/asm.js/JS is just hack, hack, hack; thanks Mozilla.
- Maturity: the bugs just need time to be sorted out.
I don't think PNaCl guarantees timing determinism.
Not sure what you mean by "code signing" in this context.
The API attack surface is a non-issue; it's already accessible by any page that serves a PNaCl object.
There's a small performance deficit but it's getting smaller. AOT compilation and caching of compilation results is possible; asm.js did it.
I already mentioned shared-memory parallelism (it's just about done); did you read what I wrote?
Standard cross-browser Web APIs give you access to those devices already.
Same-origin is a straitjacket --- a necessary one. "Break the Web's security model" isn't a desirable feature.
"Elegance" is just a point of view. From another point of view, "duplicate everything in the Web platform" is not elegant.
As for maturity and bugs --- having multiple interoperating implementations and real specifications is a good way to drive competition on quality ... and to distinguish "this is a bug, fix it" from "hello de-facto standard!"
It's sad. PNaCl is more efficient.
Go to lichess.org/analysis, make a few moves, turn on the engine analysis.
With Firefox and WASM, my machine computes 300 knodes/s.
With Chrome and PNaCl, my machine computes 2000 knodes/s.
That's a big step backward.
I see nothing inherently more efficient about PNaCl. Both it and WASM expose an architecture-agnostic 32-bit virtual instruction set, then rely on the user agent to apply good optimization. WASM’s performance will improve.
Non-portable NaCl did have the advantage that it could specifically target and optimize for one CPU architecture, and it had the beginnings of 64-bit address space support. These were nice, but not widely used, and both could conceivably make it into WASM in the future.
> Q. Why not NaCl or PNaCl instead? Are you just being stubborn about JavaScript?
> A. The principal benefit of asm.js over whole new technologies like NaCl and PNaCl is that it works today.
asm.js, however, wasn't good enough to actually be useful, as evidenced by the lack of adoption and the move to wasm. So now we have wasm, which is not backward compatible. We would be further along now if Mozilla/Eich had gotten behind Google's more mature effort; this really was stubbornness, IMO.
Actually, the AOT optimization path for asm.js has been supported by Internet Explorer/Edge for quite some time. Chrome also supports an AOT asm.js path now through its wasm implementation [1]. Before that, Chrome and Safari did optimizations that just happened to make asm.js faster ;)
It's likely that if WebAssembly hadn't happened, everyone would have implemented AOT optimizations for asm.js, and all the new features (shared-memory threads, SIMD, 64-bit ints, etc.) would be targeting that instead.
PNaCl had two big problems Google never attempted to solve:
* No spec, just import some particular version of LLVM and it does what it does
* Dependence on Pepper for its platform API --- just a big pile of Chromium code that does what it does and duplicates all the standard Web APIs
I don't have any insider knowledge, but my sense is that Google never tried to solve these sorts of issues because Mozilla made it very clear that they wouldn't support it. Now they're backtracking and building a non-JS asm system too. Do you think wasm apps will always use a slow JavaScript FFI to call OpenGL, say? We'll get a new Pepper in another decade or so.
I personally told Google people that the Pepper platform API situation was unacceptable to Mozilla years before PNaCl even appeared. That didn't stop them pushing ahead. So if your theory is correct then Mozilla opposition was not strong enough to stop PNaCl development, and not strong enough to stop them exposing it to the public Web, but strong enough to stop them trying to turn it into a proper Web standard.
Wasm is only superficially non-JS. AFAIK every browser vendor is implementing Wasm by reusing the optimizing compilers in their JS engines.
I think we might see some extensions to WebIDL and Web platform APIs to improve Wasm app performance. We're sort of already seeing that with [AllowShared]. But there's no need to develop entirely new platform APIs, because fundamentally there's no reason calls from Wasm through JS API glue to the browser should be slow; JS API glue can be inlined into the Wasm code, for example.
By the time Google disables PNaCl on Chrome Apps, this kind of thing won't be an issue. There is a reason why the announced 2018 deactivation excluded Chrome Apps and Extensions.
> I haven't seen any real activity from browser makers to add raw UDP APIs.
Well, for the Chrome Apps use case for Chrome OS, chrome.sockets.udp seems to be available, and wasm can call out to JS APIs. What other browser vendors do isn't really relevant for Chrome Apps.
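For reference, a hedged sketch of what using that API looks like (placeholder peer address and payload; a wasm module would reach this through imported JS glue):

```typescript
declare const chrome: any; // Chrome Apps global, not in the standard DOM typings

// Create a UDP socket, bind it to an ephemeral port, send a datagram.
chrome.sockets.udp.create({}, (createInfo: { socketId: number }) => {
  chrome.sockets.udp.bind(createInfo.socketId, "0.0.0.0", 0, (result: number) => {
    if (result < 0) return; // negative result codes indicate bind failure
    const payload = new Uint8Array([1, 2, 3]).buffer;
    chrome.sockets.udp.send(
      createInfo.socketId, payload, "192.0.2.1", 9999, // placeholder peer
      (sendInfo: { resultCode: number }) => {
        // sendInfo.resultCode < 0 indicates a send error
      });
  });
});
```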
This is a sad day. NaCl was excellent tech; it's a shame it didn't get incorporated into LLVM proper and didn't take off. We should be using NaCl/PNaCl for all apps everywhere, and for components within apps, etc.
ZeroVM was a desktop/server sandboxing environment that just didn't get any attention and mindshare. Shame!
I want a ZeroVM-like system that makes all of the Debian user-space available on any other OS, each app in a little sandbox... It ought to just be another compiler target, fully automated.
That wouldn't be portable to other architectures, though. Webasm is supposed to run on anything a modern browser can run on. I think it would also be much harder to guarantee it's secure.
Have you seen "The Birth and Death of JavaScript"? More interestingly, it could vastly reduce the cost of interprocess communication, to the point where the idea of a process starts to look a bit weird.
Want to query a database? Upload a function you wrote yourself for complicated queries like GIS, or let the SQL parser generate the raw wasm on your behalf.
Want to authenticate your users, but don't want to deal with all the different permutations of hardware tokens, SSH keys, etc.? Just let them upload a wasm function.
It seems like a lot of people were simply unwilling to support Java. Bundling the execution environment with such an un-fun language was probably a mistake. Not to mention the patents, and Oracle in general.
It was before my time, but I don't imagine the open source community was ever going to really embrace it.
So if I understand correctly, they didn't like LLVM because it wasn't really designed for this purpose. Webasm has some advantages over it, like being designed from the ground up to be a secure sandbox and to fit in with existing JavaScript JITs. LLVM also isn't a fixed open standard and doesn't guarantee backwards compatibility (I think). You can also compile LLVM code to Webasm even today, so it isn't that much of an issue.
But this is just my impression from listening to some talks by the developers of Webasm.
I work on off-tree LLVM and have been following NaCl from the beginning. I think it's mostly that nobody wanted to adopt Google's solution or even admit that browsers needed sandboxing. Hell, Firefox only recently got serious about putting different sites in different processes. It took them years to accept that they needed a PNaCl, and by then they couldn't really stomach using something already tried and tested by political opponents. Browsers and LLVM are both too political.
Supporting pNaCL and whether or not the browser's own code is sandboxed seem like very different issues; I'm not sure why you're conflating them.
> Hell, Firefox only recently got serious about putting different sites in different processes.
A multi-process architecture is a huge thing to try to bolt on later, and Firefox was also held back because many of its add-on APIs were incompatible with a multi-process architecture. They are finally making progress at least.
I'm not familiar with what goes into a browser, but for some reason, browsers seem to eat up a lot of resources.
While I am happy that it looks like this will (more or less) be standardized across browsers, I still hope for the day when running a more minimal browser (text, images, maybe video) becomes viable. Of course, I'm pessimistic about this, seeing as so many sites aren't functional without JavaScript and related technologies, but maybe some web developers care about choice. Who knows.
I do this, or at least, I'm trying my hardest to. I use terminal interfaces to my usual sites like Reddit, Gmail, GitHub, Google, etc. A mouse button or keystroke opens links via a little script I've written, which basically opens media files in feh/mpv/whatever and HTML pages using [Mercury](https://mercury.postlight.com/web-parser/) and links, and if I really need to open a page in Chrome for some reason, the script will start nagging me to find a terminal client for that service if I use it very often.
We used PNaCl to port some games to a smart TV. It was a painful experience: debugging Chrome from a custom gdb, breaking changes from release to release, and some very subtle bugs in the Pepper API. But the final results were surprisingly not so awful.
Why bother investing time and resources in Google tech when they keep discontinuing it? They don't care about enterprise software; if they did, they wouldn't act like this. AMP? lol, think again before implementing it; it won't be worth the effort, since Chrome will eventually drop that too.
A good reminder not to invest in any Google specific technology.
> We recognize that technology migrations can be challenging.
The suggestion I got was that "HTML5 Video" was the subject. Which still doesn't make sense; the alternatives it replaced (Flash, QuickTime, WMV, Java-based video players...) were all radically worse, and much more abusable.
HTML5 video is just an example of shitty browser behavior, as for some reason every web browser maker seems to think that autoplay videos are great. They're not, because sites abuse them.
Worse, some sites these days will wait until you've scrolled down a bit before they start playing a video, leaving the user to hunt it down and stop it.
As for Flash, it was indeed bad, but at least I had the option of not installing it.
> every web browser maker seems to think that autoplay videos are great
Huh? This has nothing to do with browsers; autoplaying videos is purely a site-author decision -- and one that authors already frequently made before HTML5 video was widely available.
Not really. Browsers just need to block autoplay by default for videos outside the visible viewport. Or better yet, do what they did with popup blockers: give the site one chance to behave well; if it doesn't, it gets blocked and the user gets a notification that autoplay was disabled.
Blocking scripted calls to "element.play()" really sucks for Web developers building games and other kinds of apps. Safari on iOS did this and it was horrible for developers.
Firefox and Chrome let you mute tabs. Firefox Nightly also blocks playback in any tab you haven't actually looked at yet. Other than that there's not much they can do.
Not the OP, but with HTML5 video in modern browsers, you can't simply escape obnoxious auto-play videos by disabling/never installing RealPlayer/Shockwave/Flash/etc.
No, but unlike with plugins the browser can't affect (like Flash), with HTML5 video it should be possible to use standard JavaScript to write a user-script or a browser extension (or even build your own browser) that finds all HTML5 videos and prevents them from autoplaying; see the sketch below.
Because things are built to standards now, we can build tools to manipulate them the way YOU want (which includes making them not auto-play), so the move from Flash and other 'black box' plugins to HTML5 video has been a HUGE win for usability!
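For what it's worth, a minimal sketch of such a script, using only standard DOM APIs (a real extension would want extra care around legitimate, user-initiated playback):

```typescript
// Strip the autoplay attribute and halt playback on every HTML5 video.
function blockAutoplay(root: ParentNode): void {
  root.querySelectorAll("video").forEach((video) => {
    video.removeAttribute("autoplay");
    video.pause(); // in case playback has already started
  });
}

blockAutoplay(document);

// Also catch videos injected after page load (scripts, infinite scroll, ...).
new MutationObserver(() => blockAutoplay(document))
  .observe(document.documentElement, { childList: true, subtree: true });
```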
ECMAScript trackers/ads are blobs of code that can sometimes be identified based on the requests they make; WASM trackers/ads are blobs of code that can sometimes be identified based on the requests they make.
The only real difference is that the WASM variant will probably run faster than obfuscated ECMAScript.
I agree. It seems like most of the extra horsepower we have these days is used for tracking on behalf of a company instead of features that are useful for the end user.
It's a format designed to be a target for static languages that do manual memory management (like C/C++), rather than a dialect of JavaScript that some engines know how to recognize and optimize. The binaries are smaller and much faster to parse, and the sandbox model results in much less overhead.
For a concrete example, at least two JavaScript engines (those of Chrome and Firefox) need to emit almost no range checks on regular memory accesses with WebAssembly on 64-bit computers. (For those interested in going past ELI5, they do this by allocating a large chunk of virtual address space, such that any out of range pointers will land somewhere in that space (WebAssembly currently only has 32-bit pointers). Some of the space is filled with memory, and the rest causes a controlled signal/exception so the engine can safely terminate the instance.)
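You can poke at this from the JS side with the standard WebAssembly API; the whole sandboxed address space really is just one flat, resizable buffer:

```typescript
// A wasm "pointer" is nothing more than a 32-bit offset into this buffer.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
const bytes = new Uint8Array(memory.buffer);

bytes[42] = 7; // roughly what a wasm store to address 42 boils down to
console.log(memory.buffer.byteLength); // 65536 -- offsets past the end trap
```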
> target for static languages that do manual memory management
So I will have to do all the garbage collection myself? Isn't that a huge burden on the programmer? And will this lead to lots of apps with memory leaks that needlessly eat up my resources?
Plenty of people were left scratching their heads when the new Google Earth was introduced, precisely because it was clear that PNaCl's days were numbered.
I think the likely explanation is that Google is a large company with numerous departments and those departments don't always pull in the same direction. I imagine some people within Google weren't happy with the Google Earth announcement.
The code base is almost certainly in C or C++. The parts that use the APIs will need to be changed, but it won't need anywhere close to a full rewrite.
Webasm doesn't currently have access to SIMD instructions or the GPU, so that kind of intensive matrix-math work might not benefit much from Webasm.
To some extent yes, but not nearly as bad. One feature Wasm has (shared with asm.js, but NOT PNaCl) is that the stack of return addresses lives outside addressable memory, and therefore can't be corrupted by an arbitrary-write primitive, so no ROP attacks. Likewise Wasm memory is entirely non-executable so you can't return or jump to exploit code. Function pointers are quite restricted; you can only make an indirect call to a function that's explicitly listed as indirectly-callable.
Wasm applications can have type confusion bugs, use-after-free bugs, and array overflow bugs, etc, but this is inevitable for any realistic C/C++ compilation target.