Introduction to WebAssembly: why should we care? (tomassetti.me)
296 points by ingve on Dec 19, 2017 | 248 comments



I'm worried that this will make the JavaScript trap[1] even more of a problem. The default operation of the web is to allow remote sites to push non-free applications to your browser that it then proceeds to execute.

As people have been shifting towards running everything in the browser (just like people like me run everything in Emacs), this effectively results in a revival of ubiquitous proprietary software.

I don't think this is desirable.

Granted, we already have this problem with plain minified JavaScript, but as we shift more towards treating the web as a platform for which to compile software, this problem is going to become worse.

I'd love for the web to be a set of stable application interfaces that in-browser applications can talk to. Give me the sources and I'll use my package manager to build the JavaScript applications I want my browser to execute for particular online services. I want more control over the increasingly complex software that we're expected to just execute blindly upon reception.

[1]: https://www.gnu.org/philosophy/javascript-trap.html


I think the idea that "access to source code == freedom" is out of date. Imagine someone gave you a thumbdrive with the complete source code for all of Facebook's products, build system, infrastructure, everything that's checked into their repositories. They also gave you legal carte blanche to do as you feel with it.

Would you be free of Facebook? Would you be able to do anything useful with it at all?

No. The problem is that Facebook still has all of the data and that's where the freedom and control live today. The code is just an interface for it.

Online multiplayer game developers figured this out decades ago. The primary way to prevent people from cheating is not by preventing them from hacking or cracking the client application code. It's keeping the game state — the data — only on the server.


Yes, that's a different problem than software freedom:

https://www.gnu.org/philosophy/who-does-that-server-really-s...


> Would you be free of Facebook?

Are you kidding?

The source code for the full toolset would reveal how they analyze their data, what they find relevant (even merely based on which types of tools exist and which don't), how access to that data is defined for various categories of employees, differences between what they claim about those capabilities and the actual infrastructure, problems they've had which they've attempted to fix in changes to the code, how they set prices to sell access to advertisers, how their frameworks for responding to lawful requests for user data from government agencies work (hell, even inferences about which government agencies they respond to), whether or not they've made serious attempts to curb the fake news problem, a full understanding of the ways in which they track people who don't have a facebook account across the web, and probably a whole host of other enlightening details about their entire operation.

I'm sure there are people who can make quite accurate guesses about how all of that probably works without looking directly at the code. But if the Snowden leaks tell us anything, it's that a handful of security specialists being able to deduce something is completely different from a critical mass of programmers being unable to deny that a problem exists. Access to that code would certainly give developers of FLOSS privacy/security software a better idea of how to protect their users' privacy.


>Online multiplayer game developers figured this out decades ago. The primary way to prevent people from cheating is not by preventing them from hacking or cracking the client application code. It's keeping the game state — the data — only on the server.

At least, good online multiplayer game developers.


It's not sufficient. It may or may not be necessary. But it is useful.


Proprietary software doesn't need any revival, because it never went away.

The irony is that proprietary browser-based software happens to run on top of FOSS libraries and languages, to which most companies hardly contribute anything back.


> Proprietary software doesn't need any revival, because it never went away.

True, though we have reached a point where one could be really comfortable without proprietary software. I certainly am. It's just that with an increasingly "appified" web, proprietary software is again making inroads on otherwise free systems.


> True, though we have reached a point where one could be really comfortable without proprietary software. I certainly am. It's just that with an increasingly "appified" web, proprietary software is again making inroads on otherwise free systems.

Agreed. I'm extremely comfortable with my fully free system, and the only risk of running non-free software I face each day (a risk I've mitigated) is the constant barrage that others attempt to force upon me through my web browser---free browsers gladly download and execute non-free software by default.

I gave a talk about this and the issues of package management and code signing at LP2016: https://media.libreplanet.org/u/libreplanet/collection/resto...


So, viral GPL FTW?

Seriously, there are no shared libraries in compiled monolithic WebAssembly, are there? I think[1] that means included GPL code will require exposure of the source.

Maybe if we're lucky some best practice will emerge that the source is often available as a sourcemap so companies don't have to worry about being sued for infringing the GPL. It sure would be nice to finally see an outcome of companies erring on the side of (legal) caution that's better for consumers.

1: "Think" because everybody seems to like to argue the specifics of this point.


Shared objects are planned for WebAssembly.

In any case most companies are busy moving away from anything GPL related.

GCC just got kicked out of Android, and all the Linux alternatives for IoT are using MIT-like licenses.


What? There's absolutely no need for such companies to view this as anything more dangerous than in the past. Nothing in the GPL says you have to be provided with a copy of the source along with the program. It also doesn't say the program has to come with a link to an online repository for the code. The GPL says that the code has to be provided when requested. So go ahead, contact the company and request the code.

Always cracks me up how many people think the GPL means "I get everything I want, exactly how I want it".


> Nothing in the GPL says you have to be provided with a copy of the source along with the program.

What did I say that made you think that? I said the GPL requires exposure of the source, and maybe some best practices could emerge if we're lucky that might amount to it being easily available.

> Always cracks me up how many people think the GPL means "I get everything I want, exactly how I want it".

Interesting, as it always cracks me up when people decide to interpret statements in contorted, narrow ways just so they align with a pet peeve and they have something to rail against.


Plenty of companies use FOSS libraries, languages and tools outside the web - never to contribute anything back. And there is nothing wrong with that as long as the license is respected.

That said many companies do contribute back and in the case of the web quite a few big libraries and frameworks were released open by private companies.


I don't understand this fear of webasm. Anyone can run their javascript through minifiers, or compile C++ to asm.js right now. Webasm doesn't move the needle of obfuscation much at all. The binary format can be turned into the textual AST representation directly.

http://ast.run/


Same holds true for machine code: anyone can run their C through minifiers or compile to machine code right now. Machine code doesn't move the needle of obfuscation much at all. The binary format can be turned into a textual asm representation directly.

Currently, the difference in performance between asm.js and WebAsm is about 5%. It doesn't look like performance gain or portability is the main reason to select WebAsm over JS. IMHO (I'm a developer myself), the main reason is better obfuscation of the code.


> Anyone can run their C trough minifiers

I don't know of any programs that are distributed as C to be run directly like javascript.

> Machine code doesn't move the needle of obfuscation much at all

There are two very fundamental mistakes you are making when equating webasm to raw processor instructions. The first is that webasm, even in its binary format, is still organized as an abstract syntax tree, so its instructions are not flat; they are in a hierarchy.

The second is that webasm doesn't take care of the input and output. Right now it doesn't even interface with the document object model, so all the IO is still done through javascript.
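As an illustration, a minimal sketch of that JS glue (assuming a hypothetical add.wasm module that imports an env.log function and exports add; the names are made up):

    // The module can only reach the outside world through what we pass in here.
    const imports = {
      env: {
        log: (value) => console.log('wasm says:', value),
      },
    };

    WebAssembly.instantiateStreaming(fetch('add.wasm'), imports)
      .then(({ instance }) => {
        // Call an exported function; any DOM work still happens on the JS side.
        document.title = 'sum: ' + instance.exports.add(2, 3);
      });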


> I don't know of any programs that are distributed as C to be run directly like javascript.

https://github.com/kripken/emscripten


That's not what emscripten does. It compiles C to javascript and the program is distributed as javascript, not C.


Is executable size not a good reason?

Also, I don't know how you think there's a noticeable difference in readability between minified asm.js and textual wasm for the average person. It might be more difficult for a programmer to read the wasm, but I can't imagine that "disassembled" asm.js is particularly readable in the first place (and both are completely incomprehensible to a non expert)


No, because the size is about the same after compression.

The average person sees no difference at all. Moreover, I tested the tank demo right now, and the asm.js version is about 6% faster at loading (1.99s vs 2.12s from cache in Firefox 57.0/Fedora 26).



Asm.js is just JS. "No support" just means "no ahead-of-time compilation for JS in the engine."

If you are trying to point out that wasm is supported better on iOS, then look at performance tests: https://arewefastyet.com/#machine=29&view=breakdown&suite=as... . On many tests, Safari is faster than other browsers at asm.js microbenchmarks.


I still don’t understand the primary use case or benefit over JS. Looks a lot like a solution in search of a problem. Maybe it can be used for DRM?


This is hardly inherent to Javascript/WebAssembly. Installed applications are binary code, i.e. closed by default unless you explicitly provide source code. The same holds true for web applications; however, for some reason we don't care that much about them being open source anymore...


Admittedly, I'm looking at this situation from within my comfortable bubble of a GNU system in which I only use software for which I have a source-to-binary mapping through GNU Guix.

JavaScript applications are the exception in my environment and they threaten to burst that shiny bubble.

Aside: I really think that the terms "open" and "closed" are imprecise and confusing, which is why I avoid them. With deterministic builds and free software I have a mapping from source to corresponding binary; so while the binary may not be directly editable (or "closed") it still is free software and I could edit the corresponding source to obtain a new binary. In my opinion the problem really is one of practical software freedom, which has never been achieved in a satisfying way for web applications.


I guess I mostly agree. I just think that theoretically it should be possible for web applications to be open source just as much as our GNU systems are, and that it's a shame that in practice, this is often not the case.


The javascript part of web applications is fairly open source. Most libraries are OSS licensed (probably MIT) and include plain source along with the minified versions that can be verified, forked, modified or redistributed as desired. Any browser allows the source of any page to be viewed or saved locally - and I would bet dollars to donuts that even with minifiers, most javascript running on the web is still plaintext. Browsers allow turning off javascript entirely, or overriding it with custom scripts using plugins like Greasemonkey, which is a lot more freedom than most native apps provide.

The only problem from a free software perspective is that not every line of javascript in the universe is explicitly linkable to a GPL license in a static file, as one would find in, say, a C library. But in every other way (I would argue, in every way that really matters) javascript in the browser is either as free or more free than other languages in their respective environments.


Don't forget that there's also a large server component that's usually closed source :)


The web throws a monkey wrench into a lot of assumptions the free software model seems to make about what software is - obviously you can't download the source of an entire server stack and compile it, much less rewrite and redistribute it, but that would seem to be what would be required for the web to be 'free.'

Ok, technically you could but that would be insane. Even if you consider how much FOSS exists on servers, it's impossible to ever be certain what is running the logic of a site unless you can run it locally, which defeats the whole point of the web.

But arguments that javascript in particular is hostile to user freedom seem a bit overstated... and even Stallman doesn't suggest the answer is getting rid of it altogether. Although he does seem to believe that only running free javascript somehow makes it safe to run... which isn't true.


> The web throws a monkey wrench into a lot of assumptions the free software model seems to make about what software is - obviously you can't download the source of an entire server stack and compile it, much less rewrite and redistribute it, but that would seem to be what would be required for the web to be 'free.'

This looks like a strawman.

For a user of a service there is no difference between a service provider that uses free software and a service provider that uses proprietary software. Proprietary software primarily harms the user of that software --- and in this case it is the service provider, not the user with a browser.

There are different concerns with outsourcing computing and/or communication to services, but these are not the same concerns that apply with the use of proprietary software.

JavaScript applications, however, do run on the user's machine and thus the usual concerns about running non-free software apply.

> [Stallman] does seem to believe that only running free javascript somehow makes it safe to run... which isn't true.

I haven't heard him argue that position. Free software obviously can be faulty and it can be unsafe to use.


A lot of sites are transpiled these days, so you have the 'source' but it's generated code


JS on the web wasn't open to begin with. Just because it was sometimes human-readable doesn't make it free, to put it how Stallman would.


This is another case where the "open/closed" image doesn't apply and only contributes confusion. It wasn't free to begin with --- though there are notable cases where it is in fact free.

There are many JS libraries and full web applications that are in fact software libre. It just isn't really any freedom the users can exercise due to the lack of application interfaces.

If I were in the mood for an argument, I'd argue that with all the ugly SOAP and Java services users had more potential freedom than today.


I don't particularly disagree but this theme of ehrmagerd-they-took-err-open-software comments under WASM posts is beginning to look like a meme.

> Give me the sources and I'll use my package manager to build the JavaScript applications I want my browser to execute for particular online services.

The vast majority of people do not understand or want that. It's mostly a nerd fetish (and I count myself in).

> Give me the sources and I'll use my package manager to build the JavaScript applications I want my browser to execute for particular online services.

This openness is also a double edged sword. People focus on the warm fuzzy side of it and forget the mess that it tends to create for a platform because no one can ever rely on some features being available for everyone.

You will need a never ending amount of feature detection, polyfills, polyfills for your polyfills, transpiling, blah blah blah because oh look someone wanted to exercise their freedom to disable textboxes on their browser. And now they are demanding that you make your application gracefully fallback to handle the case where a textbox is not available. IT'S THEIR RIGHT!

Eventually someone will show up with a baked sweet potato and demand that your application must work on their potato because they have disabled all features but still want the functionality.

You can't realistically cherry pick underlying software pieces like that.

Rather than delivering buggy software that works under a million different combinations of features and settings, I will deliver a package of software that runs predictably well on one or more predictable platforms. That is "the atom". You either take it all, or not at all.

In my experience, average users tend to be perfectly happy and satisfied with that. They are almost always running with default options everywhere anyway. It's the GNU-enthusiast hacker-types (and I count myself in) that tend to throw a tantrum about their custom preferences and philosophical and technical objections.


I'm worried about the intersection with EME and DRM.

No more open web. And the crap will become inextricably intertwined with the content of value.

Right now, it's "for content." There is going to be enormous commercial pressure to make it "for everything".

We should start reconsidering the term "user agent".


Use a code beautifier. JavaScript code isn't encrypted or compiled to binary. It is just text.
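For example (a sketch assuming the js-beautify npm package, which prints the re-indented source to stdout):

    # Re-indent minified JS so it is at least skimmable again
    npx js-beautify app.min.js > app.pretty.js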


> It is just text

This misses the point. Looking at a binary through a hex editor is also just text --- and the binary is also "just code", albeit at a level at which only a few people are comfortable working.

Obfuscated code (and generated code in general) is clearly not the preferred form, so it hardly even counts as source code.

But let's pretend your point were valid: would it remain valid with WebASM?


FWIW, I agree with you, but

> But let's pretend your point were valid: would it remain valid with WebASM?

Yes (pretending GP's point were valid), because WASM has a text format[1], which is how "View Source" on the web is meant to be used for viewing WASM running in your browser. The text version can be derived from any WASM file (i.e. you can do the equivalent of beautifying JS to any arbitrary WASM).

[1] http://webassembly.org/docs/text-format/
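For a sense of what that text format looks like, here is an illustrative hand-written module (not derived from any real site; older tools spell local.get as get_local):

    (module
      (func $add (param $a i32) (param $b i32) (result i32)
        local.get $a   ;; push the first parameter
        local.get $b   ;; push the second parameter
        i32.add)       ;; add the two values on the stack
      (export "add" (func $add)))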


Also, with the ever-increasing level of software complexity many websites rely on, reading such text isn't as simple as it may seem. To give a clear example, Google's default page references just one javascript file from apis.google.com with ~8000 lines of code, which is deliberately difficult to understand by itself: variables have weird names, there are lots of anonymous functions, and so on.


I know a lot of people long for the days when the web was just text and a few JPGs. I personally am waiting for the web to become a rich, ubiquitous, standardised application delivery platform that works on any device. I think web assembly adds to the richness while being fairly standardised, so I welcome it!


These things are cyclical:

* Early computers (1960-1980s): Dumb Terminal - Remote Server

* Early PCs (1980s-1995): Local Processing - Remote Storage

* WWW (1995-2010s): Dumb Terminal - Remote Server

* JS/ASM/Etc (2010s - near future): Local Processing - Remote Storage

It's fully possible that we will switch again to the dumb terminal model. For instance, once the hassle of local code execution takes its toll, someone will have the bright idea of just putting the web browser itself in the cloud and just having a remote control connection to that browser....and the cycle will begin again.


Yet each of those steps solved a different problem.

Early computers did not have the local processing power needed to run heavy jobs.

Early PCs did not have the storage.

WWW solved an entirely different problem, namely distribution and communication.

JS et. al. solved the problem of responsiveness and interaction ie. latency.

I don't really see those as exhibiting cyclical traits, at best I see it as a correlation ie. side effects of the true problems being solved.


> JS et. al. solved the problem of responsiveness and interaction ie. latency.

You mean "introduced", right? Back before the modern web, when we all expected software to run on our machines, there was no responsiveness and latency problems, because data didn't travel over the wire unless it absolutely had to.


You can't download the entire internet. And loading a new web page every time you push a button is slow.


Not in practice, given that in the reality of this use case, JS code also does network requests, which tend to be at least as slow as downloading an HTML page.

That said, what I meant is that modern JS enabled moving into the web things that should have stayed local. It's literally two steps backward (moving software into "the cloud") and one step forward (giving back some responsiveness through AJAX).


For some reason the web feels a lot faster when I disable JS...


I can make an application as fast as you want if it doesn't have to run correctly.


Most of the time when you put JS on your page, what you're doing is breaking the thing that would - without JS - still correctly do what it's actually supposed to do.


"Most of the time"? Citation needed.

While I generally agree with the sentiment of "do more by coding less", and minimalist user interfaces, there are plenty of cases where you throw a whole lot of functionality out the window by outright banning JS.


> WWW solved an entirely different problem, namely distribution and communication.

> JS et. al. solved the problem of responsiveness and interaction ie. latency.

I don't think either of these statements is correct. There wasn't a problem of 'responsiveness' and 'interaction' on http to begin with; in fact what Sun and Netscape did with javascript was to overengineer the web because they saw it as a business opportunity to add client-side code. http was designed (badly) to exchange hypertext documents, not to perform transactions over the server, or render video-games, or play music and videos, and certainly not to transfer files; for the latter ftp was designed (also badly).


While I accept that my description of JS et. al. might be contentious, I don't see that you have argued against my piece on the WWW in general.

Regarding latency and responsiveness I agree that they are issues that depend largely on your use cases and your skill at implementing solutions. In this regard I can agree that some pages simply don't need JS. It is also being misused for ads and tracking to a degree that is problematic and can in itself cause issues.


That's basically what Chromebooks already are. I think it's the future too. The only problem is that it moves away from the "upgrade your device every X years" paradigm that makes consumer electronics manufacturers money. They'll have to switch to a subscription model and people may not like that (and personally I'm terrified of the alternative situation where a company would rather I give them my personal information than pay them $1000 every 4 years)

One way to approach the latency problem is with more aggressive colocation. For example, an apartment complex could have its own AWS or Google servers for local computation or video streaming.


> One way to approach the latency problem is with more aggressive colocation. For example, an apartment complex could have its own AWS or Google servers for local computation or video streaming.

That would be great, if only we could make it not belong to Amazon or Google. Consider the same idea phrased like this: apartment complexes have servers in their basements offering compute, and the services you use run on those servers. The model of today is that companies own services, control where the compute happens, and ship your data to them, taking ownership over it in the process. The alternative model I dream of is your data under your control, your choice of where the compute happens, and third-party code being shipped to that place.


Let's imagine we could solve the easy problem of writing the software that is fully distributed to each complex and interacts with the software in any other complex. How do we solve the operations problem of running the machines in the complex?


One model is to overprovision hardware a bit, and just not repair until it is significantly degraded (at which point you just swap it out and refurbish). Maintaining a software platform is getting easier.


VMs, containers, and whatever stuff Amazon is doing with that Lambda thing all sound like a good start.


> it moves away from the "upgrade your device every X years" paradigm

Don't Chromebooks have ~4yr end-of-support lifecycles where they stop receiving upgrades?


6.5 years for most devices, 5 years for some legacy cases, and it is a soft limit ie. they don't automatically enter EOL, they just stop guaranteeing the updates.

Source: https://support.google.com/chrome/a/answer/6220366


6.5 is pretty respectable. Not quite like Windows or general purpose Linux which can be upgraded almost indefinitely. I have Vista laptops released mid-2009 (>8yrs old) that run Windows 10 rather well considering their age. Of course, no one would have guaranteed 8yrs on those devices. I'd wonder what experiences people with EOL Chrome devices out there are having with upgrades.


I imagine you could still update with some wrench-work. An EOL date of 6.5 years sounds to me like a way of saying "beyond this point we will stop testing and ensuring compliance, and to avoid fucking over your relic (in PC terms) with an update that is untested on your specific hardware, we simply disable the automatic updates. Good luck."


Yeah, if we want latency added to literally every action... I understand that what you're saying is a possibility, but I sure hope it doesn't come to pass.


Those of us who get frustrated by latency in our tools were left behind long ago. Shiny user interfaces, cloud to-do lists, adblockers, bloated fad frameworks of the week, electron apps, blue apron for dogs... This is our reality now.

I miss the internet I grew up with. The one before all the money arrived to ruin absolutely everything.


New technology starts with the above-well-off, using tech you currently consider "absurdly unattainable to the masses" (like cars in the 20s, or the telephone in the 40s, or computers in the 60s, or 24/7 internet in the 90s): for instance, when you just get a basically free gigabit connection straight into the backbone (like in a new upscale Japanese apartment building), the very idea that latency could even be a problem just... disappears.

In that context, just attaching an interactive window to a remote running process (as long as it can stream faster than whatever is your minimum acceptable framerate at your minimum acceptable resolution) is literally indistinguishable from running locally. So I frankly do hope that comes to pass (as long as I also get to keep being able to build a desktop computer for working offline).


Latency and bandwidth are different though. Your GBit connection won't help you access servers on the other side of the Atlantic.


I have no idea why you would need to leave your country in a world where we're back to thin clients and remote processing. Pretty sure the most obvious place for that processing to take place is at what used to be ISPs and have in this hypothetical future become Interactive Service Providers rather than "internet" service providers, with low latency, high bandwidth hubs in or near your city (and more likely, multiple hubs per city).


The catalyst of such a shift could be a very good thing, e.g. exceptional internet speeds everywhere


Now I expect that usability experts from google/facebook/amazon will come and tell you that the average user does not even know what latency is.

Native apps have won over the web app ecosystem, plain and simple. WASM will not change the root causes of people deciding to use native apps over web apps, not even a bit.


Google and amazon certainly think the average user cares about latency, even if they don't know what it is: https://blog.gigaspaces.com/amazon-found-every-100ms-of-late...


> richness

That includes the problematic features like tracking. Running software from random sources is dangerous in ways we are only beginning to understand.


Let’s keep in mind that software running in a browser is sandboxed, using standardized APIs, and isn’t nearly as privileged as software running directly on your desktop.


Problem is, adding to the bl… sorry, richness, doesn't remove the old cruft.

How about having an application platform that would be just an application platform? No HTML, no CSS, no built in multimedia. Just a VM, a viewport, audio, and inputs (and local storage if the user allows it).


We already tried that - Java Applets. Silverlight was the same idea. And Flash I guess? All attempts failed in the marketplace. It turns out using higher level standards like HTML, CSS, URLs etc actually provides a lot of value.


Obligatory point that Flash didn't fail in the marketplace. Rather, it was wildly popular, so much so that Microsoft eventually felt the need to develop a comparable tech -- Silverlight.

Flash on the web faltered rapidly after many years of effort by Mozilla, Apple, Opera, and associated individuals. They were looking to move the web 'forward', had a high-profile disagreement with the W3C, so they started their own standard-setting collaboration to specify HTML5 and associated JS APIs. The blogosphere eagerly awaited the results, which promised to formally bring multimedia and rich interactivity to HTML, without having to use a vendor plugin.

When Apple announced that Flash wouldn't be supported on the upcoming first iPhone, it was over. After a few years, when apps came to the iPhone, Adobe failed at marketing the fact that their Flash assets could be compiled into iPhone apps using Adobe AIR.

With existing Flash assets effectively relegated to desktop-only, it was only a matter of time before Flash was pushed out of the standard browser stack. Although later both Microsoft and Google shipped Adobe's plugin (with better process isolation) together with the browser or the OS and hooked it into their respective auto-updaters, Flash was on its way out.


Seconded. If Flash wasn't proprietary, it would still thrive. It could have displaced JavaScript itself.


There is an open source version, Gnash. I installed it on a laptop a few years back and it ran fairly well, but a bit slow compared to the real Flash. Why do you think that didn't take off?

Edit: Looking at the wikipedia page I see one reason (though I am not convinced it's the reason it didn't take off). https://en.wikipedia.org/wiki/Gnash_(software)#Adobe_Flash_P...


It's not fully compatible. Of course it didn't take off. You want the default application to be free software.


They didn't fail in the market; they were actually quite popular (maybe not Java, but Flash definitely). It is just that Apple killed them by not making them run on mobile. That's more of a case of a major player strong-arming the market.


I was going to say the same thing - this just seems like Java applets have been reinvented


I'm all for building from scratch a lean minimal runtime platform as you describe (that's what operating systems should be); also it should be "easy": take a minimal Linux kernel and add some thin APIs on top. But good luck turning this into a project that gains any traction.

Using the browser platform to bootstrap such a thing isn't such a bad idea in comparison... sooner rather than later we need to deprecate the ugly parts, like WebRTC and WebAudio, and trim some fat here and there, but other platforms have their ugly parts as well (look at the mess that is Android).


Exactly. Leave https alone and use it for hypertext. Have another protocol (vmtps?) that is used for apps. Browsers could support both.


Agreed. It seems we are just piling on one thing after another. Surely we can start fresh with a better eye toward the future.

TLS + HTTP Methods/WebSockets + HTML + CSS + JavaScript + JS frameworks

Plus all the server side stuff. It's all a mess.


But that's also the beauty of it. Within the browser we can support the old and the new without too much trouble.


You need a browser in the first place. The standards are now so big that to write a modern, compliant browser, you need to be an international corporation.

A market with only 4 competitors (Microsoft, Google, Mozilla, Apple) is not a market. It's an oligopoly.


Good luck running an ad blocker when every site is run in wasm and rendered to canvas.


You can still block domains that serve ads.

Also rendering the whole page in a canvas implies forgoing the entire DOM, which leaves behind basics like links, forms, embedded videos, etc. It would also mean getting raw input and drawing+laying out everything yourself instead of letting the browser do it.


That's all true.

But the domain serving the ads might be the same one serving the rest of the content. And if that server is motivated enough to show you that ad, they will do all of the things you mentioned. I've been worried for some time that this will be the end-game of the ad-blocker wars.


Right, but all that could happen now. Webasm doesn't change anything in that regard.


I agree. I'm pretty excited about webasm, although I'm nervous about ad-blocking in general.


It makes it easier though.


Then ad blocking would move to image recognition and/or automatic binary reverse engineering.


> rich, ubiquitous, standardised application delivery platform

Lots of fancy words for bloat. Oftentimes malicious bloat at that.


I think the web needs some more standardized functionality (like popups/modals, better dropdown styling, datagrids) so you do not have to implement things yourself or research and find a solution made by a third party.


We shouldn't. All the nonsense we're trying to cram into the Web is making it harder to justify connecting to it. I long for the days when simple images and text were the norm. Nowadays, I need to have and devote constant system resources to a tracking-blocker, cookie-blocker, an ad-blocker, a script-blocker, a separate javascript-blocker, and a who-knows-what-else-blocker, just to do the things I want to do; let alone the things I need to do.


> I long for the days when simple images and text were the norm.

The main issue with the web currently is that tools designed for the purpose of displaying simple images and text, plus a little interactivity, are being stretched to realize complex applications.

Powerful on-demand applications on the web are a good thing, and it's a good thing that we're finally getting the tools to build them properly.


We already had them; WebAssembly adds little over Flash, Java applets, Oberon Juice, ActiveX and Silverlight, other than a format that makes all browser vendors happy.

I can easily imagine that Adobe R&D already has a working WebAssembly prototype for Flash.


Not true. A good security model is added.


It remains to be tested in the wild.

Let's see when the first examples pop up on Project Zero or CCC.


It reuses the existing Javascript security model, which has been tested pretty extensively in the wild.


Really? Yeah, it's been tested extensively, but its security has also been found to be quite lacking. I'm legitimately surprised someone can say this with a straight face.


>the javascript security model

:Ddd

So you’re saying they have mitigated every cross-origin-based exploit? I bet they really mean it this time.


Nobody's claiming it's flawless. I'm just objecting to the claim that it's untested.


Should I list all of the CVEs related to exploits on JavaScript VMs?

Just today there were a few on another HN thread.


It happens, yes. But we're not building WAsm on a greenfield.


> I can easily imagine that Adobe R&D already has a working WebAssembly prototype for Flash.

Actually Haxe and OpenFL will probably beat them to the punch, since Adobe is killing Flash by 2020.


Adobe has around 4 guys working on the whole Flash codebase.


>other than a format that makes all browser vendors happy

No small feat!


Sure it is. Find a way to lock down all user access and demand that Google and Microsoft get paid or your data gets it. Mozilla will do whatever Google says. Apple doesn't care as much, a new core tech means a new generation of iPhone/iPad/iPod/iHateThisNamingConvention, which means more money for them.


If the revenue-generating nonsense (ads, tracking, etc) weren't on the web, they would creep in wherever they could. At least on the web we can fairly easily block the majority of ads.

I liked simple text pages of the past but I also appreciate that lots and lots of applications that would only ever see a windows release (and maybe a buggy mac release) will run on whatever platform I want as long as I have a modern browser.

I hear ya though that it's frustrating when you go to a site that is pretty much only text and it just flat-out doesn't work without javascript, or they have done something weird with the css and it doesn't reflow properly on a small screen/mobile browser.


Okay, say every browser had a built-in tracking, cookie, ad, script blocker etc. How would you propose websites then make any money?


By asking people to buy what they are selling? If they aren’t selling anything, why the expectation of remuneration?


I don't. I literally don't care about them making money, so I can't propose any method. My problem is that they demand the monetization of the Web and crud up my PC. My solution is 'block everything but text and images.'


That’s not my problem to fix, no proposal is required of me.


Okay, then don't expect anybody to build one. Web browsers are incredibly complex, and nobody is going to build one for free nowadays.


Wait a minute, did I misunderstand? I thought we were talking about how websites would make money, not build browsers. Are you saying that Apple, who doesn’t depend on web advertising AFAICT, wouldn’t build this hypothetical browser because other companies wouldn’t make money? A more macro economic view, then?


>Web browsers are incredibly complex

This is the problem. A much less complex browser, without wasm, without JavaScript, with a simpler DOM... would not be that incredibly complex and expensive to build. So the point is moot.


That's only yet another reason to try and spin off a more user-friendly web, and burn the current thing to the ground.


The web was a pretty exciting place before the suits arrived trying to make money. Hobbyists writing about their passions doesn't require ads. People who can afford a computer to write their blog can also afford the pennies it costs to host a static web site.


They could ask their users. That model seems to be working well. But even if they couldn't make money? Who cares?

The web existed long before it was monetized, and it worked great! Better than now, actually.


I am betting that when WebAssembly gets mature, all Web sites will look like Flash, just coded in the framework of choice, thus finally making the browser just yet another VM.


In the process, specialised web browsers such as browsers for the blind, will be rendered useless.


Probably, or will make use of whatever features the "native" frameworks might support.

Gtk+ and WPF have very good support for people with disabilities.


Well sure, non-web platforms won't be affected by the self-destructive trend-chasing of the web.

The harm to disabled users remains.


WebAudio-based client-side speech synthesizers are the solution! Finally the blind can pay their fair share too by listening to advertisements :^)


WASM is a three-mile step back for the web, along with native DRM implementations.

HTML5 was there to obsolete quirky flash/silverlight/java applets and tell people not to confuse code and content; now the main "web ecosystem pushers" are doing everything to reincarnate Flash in a new embodiment as WASM


What does webasm enable that would cause such a massive difference in how people create web pages? Everything it does could be done now with javascript or asm.js, all webasm does is enable the same things to run faster.

I can't think of a single site that works like you are saying, yet webasm would only be a 2x-8x speedup over existing techniques.


I guess you missed all those sites using Flash, Silverlight, Applets, WebStart, ClickOnce and many other plugins.

In the long run, WebAssembly will make it possible to bypass JavaScript and target the browser as if it were yet another OS.


> I guess you missed all those sites using Flash, Silverlight, Applets, WebStart, ClickOnce and many other plugins.

I did actually, because most aren't around any more. Even so, I was talking about sites that purely use the html canvas to draw. There is nothing preventing websites from being built like that right now, so I don't see how webasm will make much of a difference.


You're missing it...

Web Assembly would allow devs to build apps without any html or javascript or css.

The UI construct could be WPF/XAML or even WinForms like tech or something new.

Scripting and CSS would be entirely unnecessary. A lot of businesses would love nothing better than to dump the Jedi-like skills of the scripting developers and trade that for mundane forms skills.


you appear to be missing the fact that there is no widespread desire by web developers or businesses to convert their webdev stack to an application development stack.

HTML, javascript and CSS are easier than C, C++ or Rust. Writing a website in HTML/CSS/JS and updating a website in HTML/CSS/JS are vastly simpler than writing a graphical application then rewriting and recompiling it.

As with Flash, WebAssembly will be implemented as fully integrated apps where it makes business sense, which will probably be a limited number of cases, such as graphics or media delivery. Everywhere else, it will either not be implemented at all, or be used alongside JS in the existing web.


Says the web developer.

Qt, Delphi and WPF are miles ahead, in terms of tooling, of what a pile of HTML, CSS and JavaScript is capable of.

Anvil and tools like OutSystems are probably the only things that come close to what Blend is capable of.

Having a pixel perfect WYSIWYG GUI designer, with a components market, painless DB integration and deploying to the web at the press of a button will get lots of enterprise love.


I don't disagree with that. I disagree with your original premise that all websites will eventually be compiled WASM blobs. There's simply no reason for that to ever be the case, no matter how nice the tooling gets.

Not every site is a business site, and not every business with a website has the budget or impetus to chase the bleeding edge of web development. HTML and javascript will continue to exist and be supported by browsers for the foreseeable future (meaning the option to choose not to use WASM will also always be there.) There are billions and billions of sites already on the web which would have to be completely taken down and rebuilt for no practical reason.

I look forward to the age of embedded binary apps on the web. I think the web is the only real option we have to preserve software long-term, and the likelihood of software being preserved is enhanced by that software remaining executable. But even in my wildest fantasies where every program ever written maps to a URL, I doubt that use case will take up more than a fraction of online content. The web is just too big and too complex and too general to reduce to any single heuristic or use case.

People are still using COBOL and Perl and pushing code to production with Notepad++ and Filezilla. The real world doesn't optimize the way you're suggesting it would. The business world certainly doesn't.


No one is suggesting wasm would replace all of the web.

We're just saying the developer and deployment optimization story would be vastly simplified without the JS/HTML/CSS stack.

Those of us who remember what development was like before "front-end" development as it is today know that story very well. The amount of undocumented, untested, and wonky code in the current web stack has always seemed ridiculous to us.

The problem with ActiveX, Flash, Silverlight, and ClickOnce was never about the developer story. It was about security and openness. WebAssembly solves both of those problems.

Open tools, pixel-perfect GUI design, compilers and debuggers, and strongly-typed languages would absolutely reduce the amount of HTML/JS/CSS in the world. It may not kill all of it, but it would put a sizable dent in it.


With all major browsers JIT-compiling JavaScript, aren't they already?


No, because it still is the pile of HTML, CSS and JavaScript hacks.

With WebAssembly you can bypass all of it, and do your UI framework in GL, and everything else with native libraries compiled into WebAssembly.


Which is a terrible usability nightmare waiting to happen...

Seriously, DON'T do this. This breaks ctrl-f. This is unlikely to work well for people who need to enlarge text or enhance contrast due to vision impairment. This breaks screen readers. This will probably break most site archive navigators, so your content is lost to history (e.g. wayback machine). This will probably prevent Google from indexing your site, killing your page ranking and driving away potential customers. Even if you re-implement all the things you think it will break, you'll find that your users have things set up in ways you'd never considered...

Just, don't. Please don't do that.


Welcome to the Modern Web. Not sure if you noticed, but most web developers don't give a flying fuck about usability (despite what their company blogs say). Broken navigation is something you encounter daily on the Internet. CTRL+F still kinda works - it can scroll you to the more or less right area of the screen, but quite often the text you search for isn't highlighted anymore. Right-click is routinely broken (most recently with stupid-ass "social sharing" context menus). Copy-paste is routinely broken with stupid inventions that alter the contents of your clipboard in hope you don't notice the extra URL in there. Modern websites turn your computer into a frying pan with the shit ton of JS they execute, most of which is not only unnecessary, but also actively hostile. And/or they mine cryptocoins, because it's almost 2018, so we need more dumb inventions.

It's a long topic, but basically boils down to companies being greedy and not giving a fuck about their users, and web developers being too busy chasing shiny to stop and care about actually providing value to users.

> you'll find that your users have things set up in ways you'd never considered...

Web companies don't care. They aim for the majority, which is users with popular browsers on default settings, without ad blockers and any plugins.


Too late, WebAssembly is already here.

I really think it will bring Flash-like web sites back, and browser vendors are the ones actually pushing it.

The wheels are already in motion, with everyone trying to port their favorite language or VM into WebAssembly.

For example, today it is Qt WebGL Streaming, tomorrow it might just run directly from WebAssembly.

http://blog.qt.io/blog/2017/07/07/qt-webgl-streaming-merged/

It will be up to those frameworks to provide usability support, like WPF does for Windows.


WebAssembly doesn't change this. There's nothing preventing people from doing this with javascript (plain or asm.js flavored) and canvas/webGL, and yet no one does. It just doesn't make a lot of sense.


Try to disable JavaScript and see how much of the modern Web you are able to enjoy, even on sites which are supposed to display static text.


So? My point is that most who rely on client-side code to render apps do so in a way that integrates nicely with the browser. Most JS apps render html; they don't draw to a canvas. They could do that, but it would break a lot of functionality that people expect from a program that runs in a browser. This won't change with webasm. It will just make such pages load (and maybe perform) faster.


The point is that wasm doesn't change this; wasm won't make it 'worse'.


Actually,

"Add support for WebAssembly as target platform for Qt

https://bugreports.qt.io/browse/QTBUG-63917


> This will probably prevent Google from indexing your site, killing your page ranking and driving away potential customers.

Actually, it wouldn't surprise me if Google used some sort of AI to recognise the resulting images of text. What this would do is make it exponentially more difficult to start a search competitor to Google. Which might explain why Google is all-in on WebAssembly …


There are reasonable people, who would only do this for applications (such as games or maps), and never for documents. Then there are those who require JavaScript for simple walls of texts.


Are you suggesting that content providers shouldn't be able to control who can select/copy/save the text on their websites? What are you, a communist?


But why on earth would you do that? That would be like bypassing the native UI framework on iOS or Android. It gives a worse experience for users, and is likely more work for you, the developer.


Why couldn't you do that now with canvas?


You can, theoretically, but it isn't practical because of the large amount of Javascript that would need to be downloaded and parsed. WebAssembly's purpose is to remove that limitation, which is going to trash what's left of the open web. Get ready for lots of websites becoming un-adblockable, accessibility-hostile, "custom ui" trash.


Needing to download a large amount of JavaScript hasn't been a deterrent to anyone for anything yet. These arguments against WebAssembly are just FUD.


> hasn't been a deterrent to anyone for anything yet

This is patently incorrect. Javascript's impact on page load time is so widespread that Google introduced AMP to "enables the creation of websites and ads that are consistently fast ... and high-performing"[1].

[1] https://www.ampproject.org/


If it had been a deterrent, people wouldn't have done it, we wouldn't have slow loading pages in the first place, and Google wouldn't have an AMP initiative. However, we have slow loading pages, and Google has AMP, which means that load speed hasn't deterred anyone from adding a ton of JS bloat.


Go look at any major site in the wild right now. They all download several MB of JS. Facebook is 1.8MB of JS out of 3.9MB. The BBC is 1.6MB out of 3.8MB. Kotaku is 2.6MB out of 7.5MB. CNN is 2.8MB out of 6.5MB. Hell, here's an article complaining about the size of websites that itself downloads almost 1MB of JS: https://gigaom.com/2014/12/29/the-overweight-web-average-web...

I've written multiplayer VR experiences that run cross-platform through the browser that only clock in at 500KB of JS. You can put a LOT of code in a MB.


You are saying the same thing as he is. Javascript's impact on load time hasn't been a deterrent to people using mountains of bloat in their page.

Thus, webasm being a few times faster to download and parse won't change a whole lot, since few web developers seem to care much about the speed of their pages.

If webasm won't change much, then it isn't reasonable to predict a future of web pages that simply render to the canvas.


You could, but it's a pain and doesn't add much. With this you could in theory port GTK or Qt to WebAssembly and then simply change a compile flag and have your C++ GUI app running on the web.


You don't need WebAssembly to invent your own UI in a canvas. WebGL and all that works from Javascript.


You can do all that with JS too. Wasm adds nothing extra. It's perfectly sensible to imagine wasm applications targeting the DOM (which it can do via JS, and will do natively in later releases) just like JS applications do now.


Yeah, you can, but then you have exactly the same problems as the browser's renderer, so I'm not sure it's useful.


Accessibility?


Another terribly inefficient, clunky pseudo-VM, if the current trend is any indication. It's like 20 years ago all over again: it's Motif, but dressed in different clothes.


You forgot about the lack of standardization in the pseudo-VM; each javascript engine supports a specific subset of ECMAScript.


Remember that WebAssembly means no more ad blockers as soon as someone ports freetype. The behavior of a program written in a Turing complete language cannot be predicted without running the program[1]. Ad-blocking by regexp or DOM element doesn't help when the entire contents of the page - including the ads - is just a bunch of {canvas,WebGL} draw calls generated by a blob of obfuscated code.

[1] https://en.wikipedia.org/wiki/Halting_problem


Why is WebAssembly special in this regard? Javascript is also a turing complete language that can just do a bunch of canvas / WebGL draw calls.


Yep. Also this could be approached as an adversarial machine learning problem, analogous to a spam filter. Modern spam filters are pretty good, so if push comes to shove, I'm pretty confident adblockers will adapt to more creative ways to circumvent adblocking

If it leads to an arms race of adblockers vs. websites wanting to serve ads, the longer the race goes on, the longer web-page loading times become. So eventually a website will lose because its loading times will get too long and users, even non-adblocking ones, will go elsewhere


> when the entire contents of the page - including the ads - is just a bunch of {canvas,WebGL} draw calls generated by a blob of obfuscated code.

Your site would also be unreadable by Google, meaning that you'll be heavily penalized in search ranking and no one will find your abomination of a website designed this way.


Yeah, no.

Google is an ad company. They will simply open an API for content providers to push content into their search engine. They don't have to maintain their crawlers, users can't block their ads, and both Google and the content providers get more revenue. It's a win-win situation.

Oh, I forgot the users: they obviously lose! But hey, at least you can write rich applications :^)


>They will simply open an API for content providers to push content into their search engine

Ah, clickfarmers and adfraudsters will welcome that with open arms.


Obviously, it will be protected via client certificates.


If it becomes the norm, Google's crawler will start running the pages and looking at the content.


It already does


People never had any problem finding Flash and Silverlight websites.


Hmm... How exactly do you know that? I can only think of a handful of full Flash sites I ever used, and of those, I learned of them from word-of-mouth advertising, not through a search engine (e.g. Homestar Runner). There could have been tons of relevant sites locked up in Flash navigation (back when that was a fad) that I never found out about because of their poor SEO -- the world would look the same to me whether they existed or not...


Currently you can block many ads based on their url (which is from a third party provider). Using uMatrix, ads and tracking are often clearly marked in red, and the corresponding js is never even loaded. I don't think that would change with webasm?


Websites will be obfuscated blobs downloaded from a single server.


That's possible now. You can serve the ad so it's indistinguishable from any other image. Redirect it and its clicks through your server. Give it an obfuscated DOM path.

It's just a hassle and can't be done by pasting a line of code.
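To make it concrete, a hypothetical sketch of one piece of that hassle - proxying the creatives under your own domain (nginx-style config, all paths and hostnames made up); the click redirection and reporting are extra work on top:

    # Serve ad creatives from a first-party path so the request looks like
    # any other same-origin asset to a blocker.
    location /assets/promo/ {
        proxy_pass https://ads.example-network.com/creatives/;
    }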


Pretty off-topic, but the reason why it isn't done is tracking, not ease of use.

Ad-networks want the ads pulled from their servers so they can track views. If web-owners serve the ads themselves, then the ad-network must trust whatever the web owner says about the numbers of visits/clicks/etc.

For now the ad-networks just don't care about ad blockers, because they are making money hand over fist anyway. If things ever get hairy for them, I suspect they'll switch to a reverse-proxy model. You point your domain to their servers as you do with cloudflare, and they serve your content with ads injected in the right places, served under the same domain. This would be pretty easy for web-owners and completely nullify ad-blockers in their current incarnation.


The more important reason this isn't being done more often is that advertisers want to push their own code and make connections to their own servers in order to protect themselves against ad fraud, and because they want their own tracking data.


"Its development is backed by people at Mozilla, Microsoft, Google and Apple."

It seems this is for real then?

JS has had a surprisingly long life actually given its original use cases, and I understand the objections to wasm, but I guess that something like this was inevitable given how big the browser is as a platform and how web apps have been steadily fattening on the back of a suboptimal language, which, if this goes ahead, can be confined to just UI again.

They are planning to add support for gc, threads, bigger mem, tail recursion, etc [1], so will it be running everything efficiently? Even on mobile, given its backers?

Wow

[1] https://github.com/WebAssembly/design/blob/master/FutureFeat...


It's probable you can already use it on mobile today. I have. https://caniuse.com/#feat=wasm
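For anyone wanting to try this, here is a minimal sketch of runtime feature detection before loading a module; the file and export names ("app.wasm", "main", the fallback file) are hypothetical, and older mobile browsers without wasm support would take the fallback path:

```typescript
// Minimal sketch: detect WebAssembly support at runtime and fall back to a
// plain JS build otherwise. All names here are made up for illustration.
async function loadApp(): Promise<void> {
  if (typeof WebAssembly === "object" &&
      typeof WebAssembly.instantiate === "function") {
    const bytes = await fetch("app.wasm").then(r => r.arrayBuffer());
    const { instance } = await WebAssembly.instantiate(bytes, {});
    (instance.exports.main as () => void)();
  } else {
    // No wasm support: load the asm.js / plain JS fallback instead.
    await import("./app.fallback.js");
  }
}
```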


The toolchain feels pretty clumsy and bloated; there are many intermediary stages that should be done automatically.

I hope better alternatives will quickly become available, or at least some package where you don't have to install a thing inside the thing you already downloaded and installed.


Agreed. Since the toolchain is aimed at automatic conversion of C/C++ applications to WebAssembly, emulating filesystems, graphics, etc., it feels sluggish to me. At this point I haven't seen many people using WebAssembly directly to develop for the web. Mostly it's people who write to a canvas.

This game logic in Rust was a good example of at least some communication with JavaScript in a non-bloated way:

https://news.ycombinator.com/item?id=15843064 https://aochagavia.github.io/js/embedded-rocket.js


Agreed, back when I got the tooling up and running it generated a whole bunch of things to emulate stdio and such. I couldn't figure out how to make a freestanding wasm library.

I wish I could just do `wasmcc -ofoo.wasm foo.c` and get a `foo.wasm` that I can include, without any implicit dependencies and such. Let me supply malloc if I call it. I'm sure there'd be libc-lite libraries in no time (if they aren't there already, I'm out of date).
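For what it's worth, here is a rough sketch of what the consuming side could look like if such a freestanding foo.wasm existed, with the caller supplying malloc through the imports object. The toy bump allocator and all names are hypothetical, not part of any existing toolchain:

```typescript
// Assumes a freestanding foo.wasm with no bundled runtime: inspect what it
// actually imports, then supply only that. The bump allocator below just
// stands in for a real malloc.
async function loadFreestanding(): Promise<WebAssembly.Instance> {
  const bytes = await fetch("foo.wasm").then(r => r.arrayBuffer());
  const module = await WebAssembly.compile(bytes);

  // Lists the imports the module declares,
  // e.g. [{ module: "env", name: "malloc", kind: "function" }]
  console.log(WebAssembly.Module.imports(module));

  let heapTop = 1024; // toy state; a real allocator would handle alignment, free, growth
  const imports = {
    env: {
      malloc(size: number): number {
        const ptr = heapTop;
        heapTop += size;
        return ptr;
      },
    },
  };

  return WebAssembly.instantiate(module, imports);
}
```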


What felt clumsy or bloated to you in the toolchain? We'd love to improve it, specific feedback would be very helpful.


I'd prefer a C++ compiler that directly compiles to wasm.

It could be either a g++ or clang++ module, but something a lot more straightforward and simple. No intermediary, no multiple dependencies. Either a simple compiler binary, or if that's not possible (although I still wonder why developers never release both source code AND binaries), a single downloadable repo that I can build using cmake.

I have read tutorials about binaryen and emscripten; I did not have a fast computer nor a fast internet connection, and it was so painful and unclear I gave up.

I get that asm.js was great, but it was mostly a hack, wasm should be much cleaner. I don't understand the choice of emscripten.


I'd like to understand you better, I don't think I do yet.

From the user's perspective, isn't emcc a C++ compiler that directly compiles to wasm? There are some internal IRs along the way, but you shouldn't notice them, like you don't notice the GIMPLE IR when using gcc?

The emscripten sdk isn't small, that's true, but that's because LLVM+clang are not small, plus the musl libc and libc++ are not small either. But all those pieces are necessary in order to provide emcc which can compile C++ to wasm. So it can't be just a single repo, multiple components are needed here - in fact, regardless of emscripten, more will be needed soon since the LLVM wasm backend will depend on the lld linker.

What we can maybe do to make this unavoidable complexity seem simpler, is to compile all the necessary things into wasm, and have a single repo containing those builds. So it would contain clang compiled to wasm, lld compiled to wasm, etc. Then the user would check out that one repo and have all the tools immediately usable, on any OS. The wasm builds would be a little slower than native ones, but perhaps the ease of use would justify that - what do you think?


Incremental linking is something that most C++ toolchains support. Lacking it is currently our biggest pain point with emscripten. I'm very happy to be wrong, but my understanding is that it's hard to add that to emscripten because it's constructed using several different tools that have been chained together. I suspect a lighter-weight WASM-specific compiler backend would make standard C++ compiler features such as incremental linking significantly easier to do. Is that true?


Actually emscripten should be getting incremental linking soon. It depends on upstream components, the LLVM wasm backend and lld. Those are almost ready now.

In other words, the emscripten integration for those components is the easy work, the upstream LLVM/lld stuff is the hard part.


I really don’t understand the industry fixation with C++. It has to be one of the WORST (as in “error prone death trap”) computer languages ever devised. God how I wish it would fade away into obscurity.

If half the effort wasted on C++ compilers and usage learning had been spent on something worthwhile...


"There are languages people complain about, and languages nobody use." - Bjarne Stroutup.

I don't think C++ will fade away, as it's already a well-established language, and it is getting several upgrades which are supported by big companies.

There is really nothing comparable to C++ as a language, to be honest. There really is no other language which is as readable and flexible as C, AND as extensible and high-level.

I tried to get interested in statically compiled languages like Rust, Go, and D, and honestly I can't get used to them. Safety is not really a good idea, because putting barriers between the compiler and the programmer will always result in pain. C++ is the ease of C with some syntactic sugar for more convenient use. I see nothing that can be as good as C in terms of down-to-earth syntax.

A good language is not a language that is well designed; a good language is a language everyone can use. C++ to me seems the least bad, except of course the toolchain.


I agree. At the moment you can use WebAssembly, but it is not exactly user friendly. Effectively, there are only build tools that can work inside other tools or as part of an existing toolchain.

However, the end game is to be able to load a WebAssembly module just like you load a JS script. So, we are going to get there, but it will take some time.


I am still a bit baffled.

I am a TypeScript user and in the article he states:

For instance, instead of compiling TypeScript to JavaScript, its developers could now compile to WebAssembly.

Alright, so I don't need to be a C/C++ dude to get some WebAssembly goodness (maybe some day).

But now with this in my toolbelt, what occurs? What does it mean? If I have a particle simulation in TypeScript on Canvas using shaders... Now I compile to WebAssembly, but now what? Can I still access Canvas, or is Canvas like a 4th dimension? If I can't use Canvas, how do I render to screen?


WebAssembly is basically a VM, a JIT'd execution engine. It doesn't know about canvas, DOM etc ... those are library features available on the side, as it were, on a browser. Those libraries will be available to any languages compiling to WebAssembly.
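Concretely, the way most demos bridge that gap today is to hand the module a drawing function through its imports, something along these lines (the module, file and function names — "particles.wasm", fill_rect, render_frame — are made up for illustration):

```typescript
// Hedged sketch: wasm can't touch the DOM itself, so the host passes a
// canvas-drawing function in as an import and the module calls back into it.
const canvas = document.querySelector<HTMLCanvasElement>("canvas")!;
const ctx = canvas.getContext("2d")!;

const imports = {
  env: {
    // The wasm side would declare this as something like (import "env" "fill_rect" ...)
    fill_rect(x: number, y: number, w: number, h: number): void {
      ctx.fillRect(x, y, w, h);
    },
  },
};

fetch("particles.wasm")
  .then(r => r.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes, imports))
  .then(({ instance }) => {
    // The module crunches numbers internally and draws by calling fill_rect.
    (instance.exports.render_frame as () => void)();
  });
```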


The end game is to have a WebAssembly module that you can load just like you now load a JS script. When we get there, there will also be (probably[1]) easy methods to access the DOM from WebAssembly.

At the moment the easiest way would probably be to write some glue code in JavaScript (see Let’s write Pong in WebAssembly https://medium.com/@mbebenita/lets-write-pong-in-webassembly...).

[1] https://github.com/WebAssembly/gc/blob/master/proposals/gc/O...


You're baffled because the author has no idea what they're talking about. TypeScript is a superset of JS, it makes no sense to compile it to a platform designed for AoT-compiled systems programming languages.


There is no need to be rude.

TypeScript was used as an example of a well-known language that currently is transpiled to JavaScript. I am not saying that TypeScript will surely be compiled to WebAssembly, just that it could.

TypeScript is designed for the development of large applications. It is a superset of JavaScript both because this facilitates learning and because there was really no alternative: in the end you had to compile to JavaScript. It is not hard to imagine a language with the same objective as TypeScript that is compiled to WebAssembly. Furthermore, if TypeScript were compiled to WebAssembly you could use it wherever there was a platform for the WebAssembly format. So if somebody created a project to consume WASM binary files and execute them from within .NET assemblies[1], you could use TypeScript outside the browser and Node.

[1] https://www.hanselman.com/blog/NETAndWebAssemblyIsThisTheFut...


> There is no need to be rude.

I apologise. I'm just… rather surprised at such an idea.

> It is not hard to imagine a language with the same objective of TypeScript that is compiled to WebAssembly.

It is not hard to imagine a strongly-typed language targeting WebAssembly, sure. But TypeScript is not that kind of language. TypeScript is just JavaScript, and JavaScript, as a highly dynamic language, is a very poor candidate for the kind of ahead-of-time compilation to a low-level target that WebAssembly was made for. I mean… it could, technically, be done, but why would anyone do so? The result would be bigger and slower than normal JS.


Let's agree that it is feasible, but unlikely.


> ...the author has no idea what they're talking about. TypeScript is a superset of JS, it makes no sense to compile it to ...

I agree with this. Mentioning compilation of TypeScript to WebAssembly shows some misunderstanding of both of these technologies. WebAssembly was designed to run code already written in some low-level language like C/C++, because running already-written software in a browser is easier than rewriting it. Of course you can compile Rust into WebAssembly, but that was not why WebAssembly (and its predecessors, asm.js and PNaCl) were conceived.

On the other hand TypeScript makes JavaScript programming easier for when the app becomes too big to maintain without tooling.


Since TypeScript is a transpiler, couldn't it output a new "MyLittlePony Lang" for example and then WebAssembly from that?


It could output whatever it likes, you can compile anything to anything.



https://github.com/AssemblyScript/assemblyscript/blob/master...

This doesn't look like traditional TypeScript to me. The explicit loads/stores give away the whole trick.

https://github.com/AssemblyScript/assemblyscript/blob/master...

And it requires JS twice the size of the TypeScript to actually use the output generated by AssemblyScript. I personally don't see WASM as a valid target for AssemblyScript for another year or two, until either the gc or the host-bindings proposal lands.


As far as I know it's a bit like marshaling for JavaScript.

In theory, you wouldn't have to parse JavaScript when delivering WASM, which should lead to faster startup times.


Yes, there are already quite a few ongoing ports to WebAssembly, including TypeScript dialects.

https://github.com/mbasso/awesome-wasm


You can access Canvas in WebAssembly, no problem here.


Now, more obfuscated than ever! Better DRM! A few more layers of bloat! Buffer overflows are back!

Search for WebAssembly demos. There's a crappy tank game.[1] WebGL is doing all the real work; the game part is simple. There's ZenGarden.[2] 200MB download. Again, WebGL is doing all the real work.

WebAssembly is going to be a way for sites to bypass ad blockers. That's the real use case. Maybe the only use case. It's all about reducing user power and giving total control to the big sites.

[1] http://webassembly.org/demo/Tanks/ [2] https://s3.amazonaws.com/mozilla-games/ZenGarden/EpicZenGard...


It's not always WebGL-y stuff. For instance Qt is working on a WASM compilation target:

> https://msorvig.github.io/qt-webassembly-examples/widgets_wi...

cf. https://bugreports.qt.io/browse/QTBUG-63917

The WASM binaries are around 10 megabytes for this. That's a lot, for sure, but this includes a whole windowing system, a raster paint engine, and a C++ reflection mechanism.


> this includes a whole windowing system, a raster paint engine, and a C++ reflection mechanism

All of which the browser already has.


Yeah, but it doesn't have Qt built in.


Sure, but if you want to get your existing 500kloc Qt app to show quickly on screen to make a demo, for instance, you aren't going to rewrite it in HTML/CSS/JS for fun.


I recently made a small project with WebAssembly: https://github.com/novoselrok/color-palette-wasm

The toolchain (emscripten) is still pretty bloated and I hope they will streamline the process somewhat.


Quoting from the article: "It will make developing for the web easier and more efficient."

Um .. no. It will make code execute faster but it will be more work to write and maintain.

For most of the daily bread-and-butter code that we produce, speed is not awfully important but clarity is. In this equation, you gain a great advantage from stuff like garbage collection, functional programming, and a well equipped package manager, i.e. Javascript is a pretty good choice (and Typescript an even better choice).

On the other hand, if you're writing very specialized algorithmic code which will be executed a very large amount of times, then WASM and fine tuning is in place. As cool a technology as WASM may be, a very small fraction of our code is like that.

My prediction is that WASM will find its place, mainly within some of the more popular libraries available on npm. If there is anything to gain from rewriting, say, parts of ReactJS in WASM then it will eventually be done. But it will be totally transparent to the user of said library.

Taking a look at the server-side landscape is also instructive. There is a plethora of languages which execute faster than JS, yet Node.js thrives. So it doesn't seem like people are eager to ditch JS. Haters gonna hate, of course.


People should look at existing emscripten projects to see what the future will be like since compiling to asm.js is already pretty mature (just a bit slow and bloated as noted in the article). Any emscripten project can compile to WASM trivially by adding a `-s WASM=1` compilation flag.

Example projects:

- OCR (tesseract): http://tesseract.projectnaptha.com/

- Computer vision (opencv): https://docs.opencv.org/master/d5/d10/tutorial_js_root.html

- Physics engines, retro games/emulators, entire operating systems, and many many more: https://github.com/kripken/emscripten/wiki/Porting-Examples-...


I'll plug my emscripten project here too :)

https://quiet.github.io/quiet-js


> JavaScript has a bad reputation, but in reality is a good language for what it was designed for: quickly write small scripts.

That's a little condescending, considering the amount of non-trivial, critical applications written in JavaScript. And it ignores the progress made on the language and tooling over the last, oh I don't know, 20 years.


Non-trivial, sure. Critical? In what sense?


Node.js uses JavaScript as its default language interface, and considering it is software intended for the server side, with a userbase including Microsoft, IBM, Netflix, Yahoo and PayPal (to name a few) in production environments, I would assert without doubt that it is in fact critical.


Critical for the survival of many businesses? Dealing with important data and processes? In that sense. Not in a system-critical kind of sense (kernels, etc).

I mean I'm glad that when I call 911 the dispatching system probably isn't written in JavaScript, but it's likely that there's a web ui somewhere in there. Police cars too, run web applications on those touchscreens.

I'm definitely aware of JS shortcomings, having had to deal with them for many years. I just found that particular quote to be a little snarky, and outdated.


> In practical terms, WebAssembly is implemented by browsers’ developers on the back of the existing JavaScript engine.

Isn't this contrary to the whole idea of wasm?

EDIT: this is a completely serious question, I honestly don't understand why this is built into the existing JS engines instead of something separate.


WASM uses the current JS/VM environments to leverage its existing infrastructure (i.e. low-level abstraction and security). In practice it is a pre-optimized bytecode that only relies on JS primitives, so it does not need dynamic typing features or GC, i.e. the slowest and least safe parts of JS. Which means that anything compiled to WASM is "near-native" in terms of performance, making the use of a separate VM unnecessary.


If I understood correctly the main reason was ease of implementation for a MVP.

At this point WebAssembly runs in a sort of VM with a security model added. The fact that wasm and JS share the interpreter should not have a security impact (ignoring implementation bugs, which are not trivial).


From the site: The kind of binary format being considered for WebAssembly can be natively decoded much faster than JavaScript can be parsed (experiments show more than 20× faster). On mobile, large compiled codes can easily take 20–40 seconds just to parse, so native decoding (especially when combined with other techniques like streaming for better-than-gzip compression) is critical to providing a good cold-load user experience. – from the FAQ of WebAssembly

If you build a website and have to parse something for 20-40 secs for a user, they should fire you on the spot for bad design. Start working for adult sites: if it loads for longer than 2 secs, redesign.


You are right only for the average website. If you are building applications delivered through the web, like games and advanced webmail, it can take a few seconds right now. With WebAssembly you could basically create all desktop applications on the web and it would be perfectly reasonable to take a few seconds to load one of them.


Gmail takes around 5 seconds to load over a 100mbit connection. Seems to not have posed any problem for them.


Just because it works and makes money doesn't mean it isn't poorly designed. It may be that some people care about their users, more than their company's profit.

Weird, I know.


2 and 5 seconds are far away from 20-40.


I can't tell immediately, but is there anything that compiles WASM to something native? As in I'd like to write some WASM and then compile that to a shared library that a C program could load in.


Just write it in Rust and have the compiler emit wasm or a shared library.


Why go from WASM to native? If you're using LLVM to get your WASM, just go directly to native; otherwise, find a native implementation of the language you're using.


For non-dev idiots like me, what is it and why is it special? I got the impression that it allows compiled scripts to run in the browser and is faster than JavaScript.


It potentially allows any language (C, C++, Rust, .NET, heck in the future maybe Java, Scala or anything you want) to be run inside a webbrowser, at near native speed.

Because any language would be compiled to this intermediate form (wasm) it would drastically simplify the distribution of complex applications.

Because any language can be compiled to that form, you can potentially reuse your existing company's code to run it inside a web browser.

Because it's already integrated in web browsers, the technology's deployment promises to be quick.

Because it's just an intermediary form, you can use a typesafe language (JavaScript isn't) and still run it at native speed rather than going through a step of conversion to JavaScript.

It's still missing a few key things (garbage collection, interaction with the DOM), but I see that becoming the new standard for pretty much anything. I can see it being used everywhere, from server side to client side to mobile devices; why not even hardware support (is that possible?)


And it will pretty much allow you to have a full app that bypasses the Apple/Google app stores and their pay tolls/random policies.


So, potentially, an OS could just be the browser and it would suffice for running apps like Android, iOS etc. already do?


I feel like a really big point people gloss over when it comes to WebAssembly is browser support. It’ll be years before we can drop JavaScript (not that I want to) because we’ll be waiting for IE and friends to drop off the face of the planet. WebAssembly could act as a catalyst to propel the web forward or segment it even more. I’m leaning more towards the latter, at least for the next 7-10 years or so.


Many sites can already afford to drop IE entirely. Frameworks such as Ember have dropped support for every version except 11. It is not going to take 7-10 years for you to be targeting evergreen browsers.


IE is already dead. Everybody non-savvy has been force-upgraded to Windows 10 with Edge, and the savvy ones don't use IE.


The more I read about WebAssembly, the more I think it is the rebirth of Flash, or Silverlight, or JavaFX without being tied to a proprietary entity (Adobe, MS, Oracle).

Will it really be good for the web in general? I guess it will depend on the motivations of those who pursue the technology.


Why? For almost everyone, it would be less work and give better results to just have a WebASM app render HTML. Except for "legacy" apps written in C++ and some native UI framework, why would anyone bypass using what is already there and works in a browser? It would be like writing an iOS or Android app by using C++ and bypassing the native UI toolkit.


What does WebAssembly mean for script kiddies and malicious sites? I mean: with great power.....


Nothing. It can't do more than asm.js-compiled Javascript is able to do today.


I doubt this will take off, because developers seem to prefer Javascript as a language.

On the server, where there is a choice of several languages both interpreted and compiled, Node.js is still very popular. If there were a war against JavaScript, people would be moving away from Node.

My guess is that this is going to be another niche, underutilised technology (like WebRTC or even SVG) that is efficient and leads to highly optimised products, but lacks mainstream usage.


There is no "war against Javascript" as the author suggests. It just happens to be the chosen language of the now insidious ad-driven corporate-sponsored web browser.

It is not Javascript per se, but the so-called "modern" browser that creates problems for so many users. The problem is not even that this program exists. The problem is that users are coerced to use it and it is controlled by a third party that needs to sell something, either to advertisers or to users.

Again, there is nothing inherently threatening about Javascript. There are many languages to choose from and anyone can choose not to use it. It is not tied to the web browser anymore. Unfortunately it is from that association that it gains a stigma like the one the author suggests with his "war on Javascript" comment.

Today, there are a variety of standalone JS interpreters, of all sizes, and the language is routinely used outside of the browser. Peer-to-peer projects use it. Tiny interpreters for microcontrollers use it. JavaScript can live on without the corporate web browser.

The problem is the irrational assumption one must use a corporate sponsored web browser to do anything interesting with a computer. This program automatically runs code from "anonymous", commercial third parties, which today happens to be JS, using the user's computer resources. The user today does not get to choose the language, but more importantly, given the level of coercion and absence of alternatives, she does not get to choose whether or not to run the third party code.


> There is no "war against Javascript" as the author suggests.

Well, there's my war on Javascript, that I wage in my head every time I have to use it.

> Again, there is nothing inherently threatening about Javascript.

Well, it's not inherent, but it exists. Javascript has (had, with WebAssembly) an inherent advantage in that even though you could compile to JavaScript, there was an inherent mismatch the farther your language semantics were from it. CoffeeScript is straightforward. Others are less straightforward. Rust requires a huge amount of runtime to be shipped even for WebAssembly.

> It is not tied to the web browser anymore.

So, the beachhead is secure, time to advance into the countryside, eh? ;)


> It is not Javascript per se, but the so-called "modern" browser that creates problems for so many users.

I think that it's both-and, not either-or. JavaScript itself is a terrible, horrible, no-good language; the modern browser environment is also a terrible, horrible, no-good invasion of privacy and destruction of security.

I do not believe that there is any purpose for which JavaScript is a suitable language (absent considerations of popularity, e.g. as on the web browser), although I could be wrong and am open to counterexamples.


It was more of a lighthearted joke about the attitude that many have toward JavaScript than an attack on the language itself.

I have nothing against JavaScript; although it is not a splendid example of design, it certainly works for what it was designed for. But I think it is also undeniable that many would choose something else for many tasks, if they had the option.

If you wanted, you could also use JavaScript for an entire operating system[1]. However, this does not mean that it is a good idea to do so.

[1] https://github.com/NodeOS/NodeOS


Not everybody wants “Scheme in the browser”. In the spirit of “Worse is Better”, plenty of people want to run C++ in the browser. I’m not one of them, but hey, whatever floats your boat.

That said, I’d like a few useful libraries (e.g. - fixed precision decimal arithmetic) written in web assembler. But otherwise, I actually prefer writing in JavaScript (as FP more than OOP) over C++/Java/C# (degenerate Simula 67 clones) for most application level tasks.

Writing in C++ won’t make DOM access or async I/O go away, though :-)


I haven't been able to find many examples of working WASM projects or experiments for a topic I have been hearing about for years, but here's one that seemed pretty well made:

https://d2jta7o2zej4pf.cloudfront.net/

However, it appears all the video editing effects render slower through WASM than JS. What's the deal?


JS has been optimized to the extreme for years and years. Compilers that compile to WASM are still new so I'm guessing there's still a lot of room for optimization.



