Google, Microsoft, Mozilla and Others Team Up to Launch WebAssembly (techcrunch.com)
248 points by mirceasoaica on June 18, 2015 | 150 comments



At this rate, I predict that about ten years from now, they'll ditch the bloody memory-leaking browser altogether and just ship a VM and a language whose standard library has a small GUI library with a DOM-like organization, and have everyone compile for that.

Sure, there are going to be nitpickers who will point out that we had that in 2005 and it was called Java, but what do they know about innovation.


I'm aware you're being satirical but it's an interesting argument which I discussed with a friend yesterday.

Java-in-the-browser wasn't a terrible idea per se. Some aspects of it were poor but the tech landscape back then was very different both in capability and in politics.


Indeed. The JVM-in-applet downsides (slow startup, memory hog due to fixed heap, unresizable, stuck in its box) were mostly contingent rather than necessary. There's no reason those mistakes have to be made again.


I don't think the idea is "Java-in-the-browser"; it's "browser-in-the-Java".


+1, I wanted to post a very similar comment :)

But let's discuss further; we should also face reality, not just reject it with satirical statements. Java didn't come to dominate web front-end technologies, and it's worth considering why. The points I see:

    - the user needs to download and install the JRE
    - bad integration with the HTML document: an applet is
      just a rectangular self-contained box on the page, and
      it's very inconvenient to script DOM nodes or handle
      events from applets
    - browser vendors (especially Microsoft) were against
      Java becoming the dominant platform
    - in 2005, Java was the only language running on the JVM
      (unlike today)
BTW, I wouldn't say Java was very slow to start up.


In the '90s, I despised Java and kept applets turned off because they took forever to download, and once downloaded, would make my machine freeze for a few minutes while the JVM started up.

It's not so much of a problem now that we all have broadband and fast CPUs, but in the '90s it was enough for me to bin Java as "a slow piece of shit" and avoid it like the plague. That probably contributed to it not taking off as a client-side web platform.


My point was that, eventually, it won't even be a web frontend. They'll just take the web browser out of the equation. Of course, a web "widget" will still be there; it just won't be a program you open.

I do believe Java failed mostly due to politics -- and more precisely, due to this:

> browser vendors (especially microsoft) were against java becoming the dominating platform

Everyone agreed, in principle, that a portable, high-performance VM was what we needed. The problem was that every vendor insisted it had to be theirs, while ever so slightly sabotaging other vendors.

In the meantime, they all had to provide a working web browser.


You have to admit that Java-for-the-web was terribly insecure and a common way to spread malware.

http://www.pcworld.com/article/2030778/researchers-javas-sec...

Honestly, that is why I always avoided it.


I don't recall seeing much in the way of security vulnerabilities with Java applets until about 2012-2013.

The other issues (slow startup, lack of page integration, unattractive UI) were dominant before that.


Well, you are probably right. In that case, it is the main lesson to learn: if we want something to happen, we must think about how to "hack" the social system, how to refactor the political/social situation. Technically, a common platform for applications is not a difficult problem. BTW, I don't blame Microsoft more than the others; as you say, every vendor tried to sabotage the others, including Sun, who didn't offer a solution sufficiently beneficial, or unavoidable, for everyone.


Java's actually a lot older than that, 1995 I believe. Though I'm not sure when Java applets became available in browsers (certainly before 2005). It's a sobering thought that whilst (in my opinion) Java applets were a very good idea, the implementation left much to be desired and it's going to take us more than 20 years to turn those ideas into a good workable solution (of which WebAssembly may be a part).


Web applets were widespread by 1996, I remember writing a few.


Absolutely, but 1996-ish Java was really lousy.


What we need is declarative html markup again (JS optional).

CSS also should be optional.

Otherwise, long live adblock, noscript, user stylesheets, ...


That's what we need for web browsers.

Unfortunately, every other web designer today seems to think I don't want to read a bloody article, but rather to be engaged by an interactive article-reading application that's basically impossible to distinguish from native applications, except for those eighty quirks that are definitely going to be solved by morehacks.js and those new CSS perversions.

Web browser developers seem to cater towards those needs, which is how we ended up with browsers where I can run fifty gazillion floating-point instructions per second in JavaScript but it takes me five seconds to find a bookmark, three of which are spent hovering over the titlebar until I remember there's no menubar anymore.


In firefox, luckily, you can still have a menubar ;)

Even in the latest nightly – press [Alt], click "View", "Toolbar" and check "Menu Bar".


Oh. I can't believe I missed that and kept hovering over it like an idiot, then going aaaah, crap, I have to press Alt. Jesus Christ.


I don't understand, aren't CSS and JS optional today?


Disable JS and CSS and try to use the internet.


I surf with JavaScript disabled by default and selectively enabled for a few frequently used sites that directly benefit from it. I rarely find sites that are unreadable without JS--certainly less often than I used to experience sites that were unreadable because of it. On the rare occasions I find a site that won't work without JS (most common symptom: a completely blank page), my decision more often than not is to close the tab and move on with my life. I don't think I'm missing out on much, and my computer's fans no longer scream constantly when my machine is idle with the usual dozens of open tabs.


I've run into several sites that have weird issues without JavaScript, but they seem to always be things that could have been implemented with traditional markup: missing form components, misplaced images, things like that. I'd say about 50 percent of the time[1], however, media-focused sites with complicated image-viewing "galleries" or a more obscure video player are totally useless without it.

It can be frustrating to have to go through this process of navigating to a site, realizing I've broken it, and then reloading with all the crap turned back on, but yeah, like you said, it's better than having my CPU revved up just to have those "SIGN UP FOR OUR NEWSLETTERS!" modals flying around the screen.

[1]Totally made this up.


That's because the designers of most web pages want the features they provide. When you say "optional" you don't actually mean optional, you mean removed. Such a thing already exists, it's called Gopher[1].

[1]https://en.wikipedia.org/wiki/Gopher_(protocol)


Most websites today don't even gracefully degrade. One of the trendy blog/article sites that gets posted here regularly (it might be Medium) is just a column of text with lots of whitespace, but the text is loaded via AJAX, so without JS, you can't even read it.


Sure, I know that, what's your solution though?


An opt-in labelling and discovery/search mechanism for ClassicWeb™ sites?


HA! That would actually be lovely. Something easily-discovered, like a "isactualhypertextnotaturingtarpit" attribute for the <meta> tag (OK, maybe something less verbose) would probably solve half the problem.


Actually, Flash also runs on a virtual machine, and its language (ActionScript) is a dialect of JS that gets compiled into bytecode.


Java failed because of terribly slow GUI libs, Microsoft corrupting the standard, and long start-up times. It could have gone fine.


Java has survived those three points. I think Java is an excellent language that is dying because Oracle has not handled security issues correctly and because Oracle has a very bad reputation.

Security ought to be the strong point of Java, not its weak point.


It survived but it didn't dominate the way that was expected. In the late '90s, Java was positioned and expected to be in the place that webapps live today - that is, something within your browser that would handle interactive live client/server applications.


If you are kidding (I just can't decide), be aware that Microsoft killed Netscape in the '90s because Netscape was starting to sell exactly that VM.

When Netscape became Mozilla, it shipped with XULRunner, which is exactly that, but nobody used it.

Firefox OS is that, again, sold for smartphones.

If they come out with a good VM (measured by the languages it can run), and a good DOM-like organization, it will happen in no time. But if we get another XUL, it just won't happen.


| When Netscape became Mozilla, it shipped with XULRunner, which is exactly that, but nobody used it.

I would hazard to disagree with that statement: the company I was working for at the time shipped thousands of devices out with XULRunner, and I wrote several XPCOM objects to support the custom hardware we were shipping.

I still miss some of the niceties that came from that.


How is XUL similar to Web Assembly at all?


XUL aimed to replace the desktop with web applications; WebAssembly may do just that.

But you are right in that there's no similarity in structure at all.


XPCOM isn't sandboxed, is it?


I think you'll be disappointed. I predict that the browser in its current form will not be going away, no matter how much some people may wish for it.


This is important because that could mean the beginning of the end of JavaScript.

But it also means the end of native apps. Apparently Microsoft and Google are playing hard against Apple.


Hooray! Can't wait to finally use a real language for web apps! We might be on the brink of a revolution that has been due for a long time!


Serious question, because a lot of people have had this sentiment; what is it you think you'll get from WebAssembly as far as new languages that you don't already get from transpile-to-JS languages today? There's everything from very dynamic languages like Opal to typed languages like Elm. What else do you expect?


Efficiency and ease of debugging? Just because it's possible to transpile to JS doesn't mean the resulting code will be fast/responsive or easy to debug. Which in turn means it won't be used for significant projects.


Debugging will use SourceMaps which already exist today. As for efficiency, I haven't read anything that suggests WebAssembly will be more efficient than asm.js, why would it be?


Try reading this: https://brendaneich.com/2015/06/from-asm-js-to-webassembly/

"Why: asm.js is great, but once engines optimize for it, the parser becomes the hot spot — very hot on mobile devices. Transport compression is required and saves bandwidth, but decompression before parsing hurts."


Imagine you create a JVM or CLR to WebAssembly converter. All languages and tooling available for them are instantly available for web. Life's good ;-)


Exactly! I've been doing the happy dance for 15 minutes and my wife, after looking at me a bit funny, totally understands.


The article states the "engineers on the webkit project" are also in on this standard. Does that not imply that Apple is also backing this?


If you cannot prevent something, you might as well be a part of it. Unofficially.


Sorry, but without Apple a technology like this would go nowhere. Half the mobile traffic on the web comes from iOS, and its desktop share is significant too. And is there anything Apple has done with WebKit to deliberately prevent a technology from being adopted? They supported WebGL, for example.


This technology is backwards compatible (at least at first), so a particular vendor's support is not needed, in the short term.


Apple doesn't give a shit about native apps vs web apps. Recall that the original plan for 3rd party apps on the iPhone was web apps only--they only allowed native apps because so many people clamored for them.

JavaScriptCore is a top-notch JavaScript engine for the web. If WebAssembly takes off, Apple will just integrate Swift with it, ship a low-power, high-performance engine, and keep selling a ton of devices.


Yay! Is HTML5 winning the war on smartwatches yet? Sorry, but native apps are much more than some language or binary code.


It would also be the end of user-side scripts. It will enable the worst of Flash, Silverlight, and Java. Horrible DRM schemes and other user-hostile features will be built with this. It's a step back.


This will be created by Google and Microsoft. Be sure they'll find a way to include DRM in it.


How does this mean the end of native apps?

You still have network issues such as latency, download limits, black spots etc to deal with.

And personally I don't like the idea of sending my contacts, photos, mail, calendar etc to every third party under the sun just to do anything.


The web has had offline capabilities for years; now, with ServiceWorkers, it's even better. This argument no longer holds water.
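
For what it's worth, the ServiceWorker side of this is already concrete. A minimal offline-first worker, with the cache name and file list invented for illustration, looks roughly like:

    // sw.js -- registered from the page via
    // navigator.serviceWorker.register('/sw.js')
    self.addEventListener('install', function (event) {
      event.waitUntil(
        caches.open('offline-v1').then(function (cache) {
          // pre-cache the app shell so it loads with no network at all
          return cache.addAll(['/', '/index.html', '/app.js']);
        })
      );
    });

    self.addEventListener('fetch', function (event) {
      event.respondWith(
        caches.match(event.request).then(function (cached) {
          return cached || fetch(event.request); // cache first, then network
        })
      );
    });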


Even better is "might work, sometimes"?


Huh? What do you mean exactly by this?


I think the author is referring to using Swift instead of phonegap/cordova/transpilers, since performance will be improved and you're not limited to just javascript as the originating language.


Native apps give you many complex native libraries including UIs.


You do realize Apple is also part of the effort?

https://bugs.webkit.org/show_bug.cgi?id=146064


Apple is its own third worst enemy (after Google and Microsoft).


Not really you still need to deal with the DOM and Web apis. This is certainly the end of javascript however.


I don't think so. I think compiled and interpreted languages both have a role to play in web applications, just like they do on the desktop. edit: and on the server.


Why stop with WebAssembly? Why not address the entire notion that we're tricking document-rendering browsers into being application runtimes?


I actually don't see a problem with using a document renderer for applications, assuming it's actually treated that way. The document-rendering of a browser is lower-level than a UI toolkit, and that makes it more powerful in a lot of ways. But we need to be able to build abstractions on top of it to actually build UIs. I think react.js's approach (or something similar) is actually pretty good here.

A lot of the problems we run into are people treating DOM nodes as if they were UI widgets, when in reality they are lower-level primitives for drawing and capturing input.
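
A minimal sketch of what I mean (invented names, not react.js's actual API): describe the UI as plain data, then project that description onto DOM nodes, instead of mutating the nodes as if they were widgets.

    // h() builds a plain-data description of the UI
    function h(tag, props, children) {
      return { tag: tag, props: props || {}, children: children || [] };
    }

    // render() projects that description onto real DOM nodes
    function render(vnode, parent) {
      if (typeof vnode === 'string') {
        parent.appendChild(document.createTextNode(vnode));
        return;
      }
      var el = document.createElement(vnode.tag);
      Object.keys(vnode.props).forEach(function (name) {
        el.setAttribute(name, vnode.props[name]);
      });
      vnode.children.forEach(function (child) { render(child, el); });
      parent.appendChild(el);
    }

    render(h('ul', {}, [h('li', {}, ['one']), h('li', {}, ['two'])]),
           document.body);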


There's canvas; build your own UI toolkit on top of that if you don't like the DOM.


Well, nobody is suggesting we stop with WebAssembly; it's just an evolutionary step from the situation that has formed up to now.


The Unity team has already run some tests of their engine with WebGL and WebAssembly, and it seems promising: http://blogs.unity3d.com/2015/06/18/webgl-webassembly-and-fe...


Hopefully we will not need Unity3d to have high-performance C# engines in the browser.


How else will you get C# in the browser? C# is high-level, WebAssembly is low-level; something needs to do the lowering.

Edit: I don't see how the future addition of GC to WebAssembly changes anything I wrote above.


You write a compiler from C# to WebAssembly. I don't see how C# is different from any other language in that regard.


Hopefully it's IL -> WebAssembly, not C# -> WebAssembly.


I think we are in agreement? Unity3d provides a C#-to-JavaScript compiler.

I replied to "Hopefully we will not need compiler to have high-performance C# engines in browser", because that does not make sense.


> Unity3d provides a C#-to-JavaScript compiler

Unity3d does too many things (except new versions of C# :P) and ties you to itself. There should be a direct C# -> WebAssembly compiler in the future.


Unity is much, much more than a compiler. You'd have to drag a lot of stuff with you and deal with a license.



Is the purpose of this to make ad blocking harder? It's much harder to work on an executable form than a declarative one like HTML/CSS.

Really, how many useful web pages need to execute much code? Games, sure. Other than that, not so much. Is this trip really necessary? We just got rid of Flash, after all.

(And, of course, it's basically Java, round 2. "Write once, debug everywhere.")


"The team notes that the idea here is not to replace JavaScript, by the way, but to allow many more languages to be compiled for the Web."

Yeah, good luck with that. Ta ta Javascript :-)


I hope the Dart language team will jump on this wagon as one of the first users.


Dart and TypeScript will be as irrelevant as JS. Why Dart if you can have Go/Scala/F#/Haskell/Ruby/etc. compiled to WebAssembly?


If. All languages you listed need GC, which won't be in the initial version of WebAssembly. In other words, you won't be able to compile Ruby to WebAssembly any time soon.

You will be able to compile a Ruby implementation to WebAssembly, but you can already compile that to asm.js. That is different from compiling Ruby to WebAssembly.


Yes and no. You can have a native VM for running a Ruby VM compiled to WebAssembly... oh my god, that sounds horrible even as I write it... but it can certainly be better than asm.js.


In comparison with TypeScript, Dart has different semantics and less legacy debt than JS.

And I think the whole point of WebAssembly is to have a choice, not a "perfect language".


JIT (generating code in memory as data and jumping to it) is one of a few things not very well supported in asm.js and presumably the initial version of WebAssembly, so Dart probably can't use the initial version and should wait.
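
To make that concrete: plain JS can synthesize and run code at runtime, which is what a JIT-based language implementation needs, but an asm.js-style module only ever sees a heap of inert data.

    // Plain JS can build and execute code at runtime:
    var add = new Function('a', 'b', 'return ((a | 0) + (b | 0)) | 0;');
    add(2, 3); // 5

    // An asm.js module, by contrast, computes over a typed-array heap;
    // bytes written into it stay data -- there is no way to mark them
    // executable and jump into them, which is what a JIT must do.
    var heap = new Uint8Array(new ArrayBuffer(0x10000));
    heap[0] = 0x90; // just a stored byte, never an instruction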


Brendan Eich deserves big respect for his help with WebAssembly. Thank you! And, of course, thanks to all involved.

WebAssembly could also become a new JVM for desktop apps and widgets.


Bob Nystrom argues [0] that a language VM has important advantages over a bytecode VM. His primary argument is development iteration speed.

How does this compare to WebAssembly?

[0]: https://www.dartlang.org/articles/why-not-bytecode/


WebAssembly (and asm.js) is not really a "VM" as in JVM or .NET in that it does not provide services they do, like GC and adaptive JIT. I think the best description of WebAssembly is a reincarnation of p-code.


It is very much a VM, in that it executes generic bytecode. Furthermore, the JavaScript engines of modern browsers (on which this proposal is based) do JIT-compile JS. Not having GC doesn't mean it's not a VM.

Quoting the wikipedia article of p-code:

> In computer programming, a p-code machine ... is a virtual machine designed to execute p-code


Wasm is not a bytecode. It is a binary AST.


Yes, it is a VM, but not a "VM as in JVM or .NET". "VM" in Bob Nystrom's article is used in the latter sense, hence the article mostly does not apply to WebAssembly.

On the topic of JIT, the whole point of WebAssembly (and asm.js) is that it is so simple that you don't need a JIT to execute it. You can AOT-compile it just fine.


This is very welcome news.

Don't get me wrong; I like JavaScript. It has its warts, and it can cause all kinds of problems in the wrong hands, but both of those things are true of all popular languages.

However, performance has always been a sore spot for the current JS-to-machine-instruction process. Being able to write code that will execute faster is important, regardless of the language that code was originally written in.


Sigh, seriously all this front end stuff is becoming a PITA. Maybe it's just me but I still find the web a mess today. Lately I've been feeling the app stores sound more appealing than actual web development. JS this, 5 new frameworks a week that, some byte code tech... All this change is making me nauseous. Time to stick to the back-end I guess, maybe with a pinch of native apps.


Would it be wrong to interpret this as the possible end of browser-focused engineers?

It seems to me that consumers have spoken: they prefer native apps (especially mobile ones) that they can quickly find on their phone's home screen. Consumers are using desktop web apps less compared to native mobile ones.

So rather than people making web apps and then trying to port them to native, we're going to have a bunch of native apps that port to the web?

OR

... maybe the web goes back to being the de facto standard if people start making insane Unity-engine-powered, bytecode-based web apps that perform like native ones.


It will also make malware faster and more difficult to reverse engineer.


Security nightmares ahead - running random binaries from the web - but the wet dreams of privacy invaders of all sorts are becoming real.


It will sit on top of the existing JavaScript engines and APIs, so no new security issues there.


This is not an issue of language; this is an issue of APIs and runtimes. Binary serialization of code has very little to do with security.


No. Categorically not. Is this intentional FUD or can't you discriminate between a program in source and compiled forms?


As if the parser of JavaScript source code can't have security issues.


Wow. If C/C++ code can run at near-native speeds in most browsers, especially mobile browsers, this could make a dent in app store revenues, especially for IAP-based games.


You will still be using sandboxed DOM APIs (which add overhead and restrict API access), you won't have access to app store stuff like in-app purchases, and your ads will be blocked by ad blockers that will ship built into browsers.


I wonder what the FSF might say about this development.


The draft design documents state that any WebAssembly code will have an equivalent text representation, so that is almost comparable to human-readable source code. It will not be a linear list of instructions like assembly source code; it will be structured, since WebAssembly is supposed to have an AST representation. [1]

I cannot see why the FSF would have a problem with this, then, since if you are distributing a .wasm file you can construct an isomorphic text source representation from it. This is in the spirit of the user getting the source code of whatever they are executing. (Though I agree it would not be as readable as the original C/C++ source from which the .wasm file was compiled; but then, something is better than nothing.)

[1] https://github.com/WebAssembly/design/blob/master/TextFormat...


The FSF is very clear what they mean by "source code"; the GPL defines it as "the preferred form of the work for making modifications to it". Decompiled .wasm isn't the preferred form (as you say yourself it's not going to be as readable as the original), so it's not source code, any more than the output of a Java decompiler or a disassembler would be.


Wouldn't you have the same problem with a minifier, then? Minified js isn't "preferred" for anything except parse time.


From what I've seen, the current GNU approach to JavaScript emphasizes encouraging websites to use code that is actually free software (i.e. legally redistributable, etc.) and marked as such; I expect they consider that more meaningful than merely being technically able to look at the source.[1][2] You can do that just as well with assembled code, so I don't think they'd care that much, but I can't speak for them.

[1] http://www.gnu.org/philosophy/javascript-trap.en.html

[2] http://www.gnu.org/software/librejs/


I don't think it makes much difference from an open/legal point of view. In fact, it could clarify things: any F/OSS project compiling down to this target will have the source available somewhere, and anyone who doesn't want to join the party can keep their code obfuscated. It would also make just copy+pasting open-source code into proprietary projects less convenient and remove the "how was I to know?" defence: anyone using the code would have had to seek out the repo and could not have missed the licensing terms unless stupid or deliberately ignoring them.


Yeah, since it is a binary format, will the web start to serve closed-source JS-like scripts?


There's nothing stopping people distributing closed-source obfuscated scripts at the moment. There's not a massive difference between minified javascript and decompiled asm.js.


Am I the only one who thinks there actually is quite a difference between minified & obfuscated JavaScript and bytecode?

Minification & obfuscation can only do so much without changing the logic of the script. Sure, spaces are removed, variable and function names make no sense, but most of the logic is still there just as the developer intended.

If we look at compiled Java/C++/any high-level language code, how the application logic is presented is substantially different from the original logic of the application, making it much harder to understand how the application works.
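
A toy example of the difference (the function is invented): minification renames things, but the shape of the logic survives intact.

    // Original
    function averageTemperature(readings) {
      var total = 0;
      for (var i = 0; i < readings.length; i++) {
        total += readings[i];
      }
      return total / readings.length;
    }

    // Minified: the names are gone, but the loop, the accumulation and
    // the division are all still recognizably there.
    function a(b){var c=0;for(var d=0;d<b.length;d++)c+=b[d];return c/b.length}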


I disagree. Obfuscation is essentially the same as a separate compilation and can do as little or as much to distort the original logic as compiling to a lower level language.

E.g. an obfuscator could turn all method calls into one identically named overload, while a compiler could emit appropriately named subroutines. The compiler preserves the logic better in this case.

Minification is something different, but minifiers do not attempt obfuscation; it is more of a side effect of their goal.


Not much of a difference if you're running your JS code through Google's Closure Compiler (https://developers.google.com/closure/). It basically does a complete re-compilation of your JS, removing dead code and unused variables and applying lots of other optimizations, and in the end spits out a big ASCII blob that could just as well be bytecode. It has nothing in common with the original source code.


And then you throw JSNice at it, give it to a college student, wait a weekend, and have a nicely readable source version.

Source: Am college student, for fun I disassemble websites, including the funny VM Google built for ReCaptcha.

Additionally, I wonder what the EU thinks about this, as anyone who has the ability to use a piece of software has the right to take it apart, inspect it, and learn from it. This right cannot be signed away with contracts (making the "Do not decompile" clause invalid), and it is violated by all these closed-source web projects.

Tbh, I should probably just decompile, deobfuscate and refactor the Google Inbox client source, and publish it on GitHub over summer break, just to show Google how useless and annoying their obfuscation is.


I don't think so. There's a plan for a text representation of WebAssembly to show when you view source. That should be pretty similar to obfuscated JS.


Java bytecode can be decompiled to very high-level source code, unless explicit measures are taken to actually obfuscate it.


Try view-source on any Google web app, like Inbox. For all intents and purposes it's pretty much closed-source already; deciphering that mess is a nightmare.


It can be a nightmare, but I did extract some small features from Gmail... they minify names and things like that, but they also add bloat in their obfuscation process. Still, I think a binary web would be a step backward.


Be aware that that change already happened years ago, between minifiers, obfuscators, and asm.js.


This came up so quickly that nobody has been able to create the WebAssembly page on Wikipedia yet [1], it seems.

[1] https://en.wikipedia.org/wiki/WebAssembly


    The WebAssembly team decided to go with a binary
    format because that code can be compressed even
    more than the standard JavaScript text files and
    because it’s much faster for the engine to decode
    the binary format
Are there real world examples of websites where the size of the JavaScript and the time it needs to decode it make up a significant portion of the load time?


Gmail? Google Docs?

And it might also be the final blow on Flash.


Did flash need a final blow? I assumed it is still around in small pockets because of projects that are abandoned and casual developers who don't intend to learn a new set of tools.


It's more pervasive and hard-to-kill than you think. At least these days we don't have project managers asking "can we add Flash?"


I have a project where downloading the JS takes about 30s on my 40Mbps home connection...

Maybe not so many "websites", but there are a lot of JavaScript applications that will benefit greatly from cutting down on size and parsing time.


Why are you serving 150MB of JavaScript??


It's become "a normal thing" to take GB-size games and compile them to asm.js. WebAssembly will absolutely make a difference.


In Unity's tests, they were able to reduce the code size of their WebGL demo from 19.0 MB of asm.js to 6.3 MB of WebAssembly code (uncompressed).

http://blogs.unity3d.com/2015/06/18/webgl-webassembly-and-fe...


Make this thing mainstream and, yes, there will be plenty.

Why would you compile your application into x86 code if you can compile it into WebAssembly, distribute it inline on your web page, and just cache it on the client's machine?

(Yes, there'll be plenty of whys. There'll also be plenty of applications for which none of the whys apply.)


Games.


I must say: ABOUT DAMN TIME!


I'm learning Rust, and it would be nice to see a Rust-powered kernel, just to hack on it and learn a lot.


This is the post I created to discuss this on reddit: https://www.reddit.com/r/rust/comments/3abgbo/webassembly_ru...


Does this get us integers in JavaScript?


Yes, because you already had them. (It uses the Emscripten tricks to use integers in JS, which VMs readily understand at this point.)

It doesn't add Value Objects yet though—that's a separate proposal.
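
Concretely, the trick in question is the bitwise coercion that asm.js standardized (the function here is invented for illustration):

    // x | 0 truncates x to a 32-bit signed integer; engines recognize
    // the pattern and keep such values in integer registers rather
    // than boxing them as doubles.
    function dot2(ax, ay, bx, by) {
      ax = ax | 0; ay = ay | 0;
      bx = bx | 0; by = by | 0;
      // Math.imul (ES6) is a true 32-bit integer multiply
      return (Math.imul(ax, bx) + Math.imul(ay, by)) | 0;
    }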


Yay!!! Choices, choices, choices. The future is bright and full of opportunities.


Earlier discussion on this topic: https://news.ycombinator.com/item?id=9732827


And how will this affect security and privacy on the Web?


Why would it? Bytecode is not substantially harder or easier to disassemble and grok than minified and obfuscated JS.


Does a plugin already exist that blocks this crap?


Does this eventually end up as yet another virtualization layer? How big and bloated does the web stack become before we start again, with a lean protocol and markup language for text documents?


WebAssembly will have a backwards-compatible JS polyfill, but browsers could implement WebAssembly natively and gut the JS middle layer. Mozilla (and then Microsoft and Apple) did the same thing with asm.js. WebAssembly becomes the unifying abstraction layer instead of JS.
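
From page code, the layering might look something like this; note the WebAssembly global and the file names are assumptions based on the design repo's plans, not a shipped API today:

    function loadScript(src, onload) {
      var s = document.createElement('script');
      s.src = src;
      s.onload = onload;
      document.head.appendChild(s);
    }

    function startApp() {
      // fetch app.wasm here and instantiate it
    }

    if (typeof WebAssembly === 'undefined') {
      // polyfill path: a JS library translates the binary to asm.js
      loadScript('wasm-polyfill.js', startApp);
    } else {
      // native path: the engine consumes the binary directly
      startApp();
    }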


> WebAssembly becomes the unifying abstraction layer instead of JS.

I wonder how long it'll take until browsers implement JS in WebAssembly, to make it easier to maintain/upgrade.


Let's ditch HTML/CSS for XAML! No more googling how to vertically align a block in another block. I'm 100% in.


That would be a dream.


Flexbox


Markdown over Gopher? Yes please.


The lean protocol and markup language will still be there to use though. This just gives us a new tool that is right for a different set of jobs and hopefully integrates well with the other parts when needed.


Assuming HTTP and HTML ever qualified as "lean".


Fair point. CSS too.

But they are leaner than piles of JavaScript, which are in turn leaner (at least from a dev-stack point of view) than a full compilation stack and process.

We have quite a range of useful options for our needs now:

* Plain text. Sometimes it is all you need. Just serve text/plain via HTTP and be done. Add markdown or similar as you see fit.

* HTML+CSS for most occasions because you want to add at least a little style.

* Add in a little JS when you need some basic interactivity or because you are presenting a lot of information and the option to hide/collapse some of it is useful to the user.

* Add a lot of JS and libraries once you are getting into proper "application" territory rather than just a fancy page of information.

* Start compiling from other languages when your project needs constructs that are not available or easily emulated.

* Start considering WebAssembly when your project really, really needs the performance gains possible over JIT-compiled JS (or you want the binary format for obfuscation purposes).

None of those options invalidates the ones before it, so it really does come down to having a good selection of tools and picking the right one for the job. The fact that it is all cross-platform (at least if you ignore "legacy" browsers like IE6 and Android 2.3.x, don't mind using less efficient polyfills for the not-quite-legacy-yet options, and are careful to remain compatible with a range of screen sizes and, once you introduce interactivity, input methods) and that the big names appear to be playing nicely enough is icing on the cake.

I think we are living in good times in this respect.



[deleted]


That comic doesn't hold if all the major parties support the new standard.



