WebAssembly: A New Hope (pspdfkit.com)
199 points by steipete on Aug 9, 2017 | 109 comments



TL;DR: They have written a PS & PDF renderer that runs completely in the client by using WebAssembly. They show the performance to be not far off native code, and this allows offloading the CPU load of PDF rendering to the client rather than the server.

Excellent !!


Correction: last I checked, they didn't write the PDF renderer themselves; it is Google's PDFium[1], which was open-sourced a while ago. Before that they were using Apple's built-in PDF engine, but apparently that had too many bugs for them.

It does not do Postscript, the "PS" in PSPDFKit are the initials of the creator, Peter Steinberger (really nice guy, by the way!). Two/three letter prefixes are common in Objective-C as there are no namespaces (I use MPW).

[1] https://pdfium.googlesource.com


mpweiher, you are correct! PSPDFKit is lipstick on pdfium. You can find its license in the binary SDKs, or simply ask them. Included or derived work must be disclosed!

Also, using pdfium via Emscripten is not news (https://github.com/coolwanglu/PDFium.js is 3 years old).

On the commercial, battle-tested side, PDFTron has offered PDFNetJS & WebViewer for years: https://blog.pdftron.com/2015/11/10/pdfnetjs-html5-pdf-viewe... (http://xodo.com/app is the demo referenced in the presentation).


Based on Peter's blog post[0] a few years ago, I was under the impression they wrote their own.

Maybe steipete can chime in?

[0] https://pspdfkit.com/blog/2016/a-pragmatic-approach-to-cross...


Hmmm...don't remember where I got that information from, so I could well be remembering wrong.

I do see that PSPDFKit folks are active in the pdfium bug tracker, for example:

https://bugs.chromium.org/p/pdfium/issues/detail?id=641

That sounds like a production problem, but it could just as well be them trying it out.


It's certainly quite plausible that they compare their behavior with other engines, and then file bugs on those other engines if they find them.


Thanks, I thought PS stood for "PostScript"!


My latest laptop is from 2009 and I very very very rarely complain about pdf.js performance.


If anyone has any questions I'll try and answer if I can.


Did you show it to Mozilla or Google? It seems like it could replace PDF.js and whatever closed-source PDF reader Chrome uses.


Chrome's PDF reader is actually open-source: https://pdfium.googlesource.com/pdfium/



And as far as I know it's the engine they're using.


I think WASM is great, but it also is not a panacea.

You can see this demonstrated in their demo where highlighting the text garbles it -- because they render without using browser technologies, they miss out on browser implementations of things like highlighting. (Another example: their "get in touch" link in the demo is not a real link and can't be right-clicked, indexed by search engines, etc.)

In the specific context of a PDF that is perhaps inevitable (PDFs likely need special text layout) but in the larger context of random native apps, you only get a "free" port to the web by cutting out web features.

With that said, let me emphasize again that in the right domains this is all super awesome. It just doesn't solve the problem of making good webapps.


PSPDFKit CTO here.

I 100% agree with you that our current implementation misses out on lots of browser-provided native behaviour. But this is not an inherent limitation of WebAssembly.

PSPDFKit started as an iOS-only framework. We then abstracted out a shared C++ layer which has tight hooks with the platform (Android or iOS).

PSPDFKit for Web is still a young product; one feature we have on our long-term roadmap is to have the browser handle rendering text natively. Our WebAssembly core can "extract" WebFonts for the glyphs from the PDF and position them using HTML tags. This truly gives you the best of both worlds. PDF files need to be rendered absolutely accurately, so for now we went with rendering to an image.

If you're interested in our cross-platform strategy, you might be interested in https://pspdfkit.com/blog/2016/a-pragmatic-approach-to-cross...


Great work guys! Keep on releasing more interesting libs for wasm!


Your complaints have little to do with wasm; as the post says, it has no direct access to the DOM.

wasm is only supposed to speed up heavy computations, like those needed to decode and rasterize the PDF; modifying the page is still done through JavaScript.


Yup, there's zero reason this couldn't just do DOM manipulation via FFI into straight JS code. It's fairly straightforward to do with wasm/asm.js.


Just curious: has anyone done this yet? Specifically, I'm thinking of something vdom-like with the possibility of callbacks into wasm. Obviously, it would have to go through some type of string marshalling and "proxied" IDs for callbacks, but the applications I'm thinking of don't require "huge" performance.


Yup, you call a C function that calls out to JS and then make whatever DOM/canvas calls you want.


Do you happen to have a link to an implementation for wasm?


Not publicly, but the Emscripten docs cover it in pretty good detail:

http://kripken.github.io/emscripten-site/docs/porting/connec...
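
The gist, as a minimal sketch using Emscripten's EM_ASM_ macro (the function name and the 'status' element id are made up for illustration):

    #include <emscripten.h>

    // C++ side: EM_ASM_ splices the JS block into Emscripten's generated
    // glue code; $0 is replaced by the argument passed after the block.
    void set_status(const char *msg) {
      EM_ASM_({
        // Pointer_stringify turns a pointer into wasm linear memory
        // into a JS string.
        document.getElementById('status').textContent = Pointer_stringify($0);
      }, msg);
    }

Going the other direction, JS calls into the compiled code via Module.ccall or Module.cwrap.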


I don't understand the point you're trying to make.

I can't comment on the highlight-garble problem you mention, because I don't see it (Firefox x64 for Linux). However, the link "problem" has nothing to do with WASM. It's a PDF link, not an HTML link, and those have always been completely separate concepts. The same "problem" would exist however you're rendering the PDF, unless you convert it to HTML (which I don't think is the goal of PSPDFKit).


What WASM also lacks is good multithreading support, shared data (between threads), locking primitives, and things like memory fence instructions.


Well, that's in large part also true of native desktop apps ported to the web. Nevertheless, the gain is so big it's still worth it.


This is not a comment on the work done by these no doubt well-intentioned folks. But to pull a quote from their article:

"compile our 500.000 LOC C++ core to WebAssembly and asm.js"

This is absolutely terrible for the web :-( We've escaped ActiveX, Flash, and Java applets, and now we're bringing lots of legacy C++ to the web in a way that conflicts with the underlying tenets?


I'd be more concerned if it weren't already basically impossible to figure out WTF is going on on any "web app" using View Source.

I wish we'd all agreed to severely cripple Javascript[1] early on and instead improved built-in form elements and stuff like frames, but we didn't so here we are. The web's already mostly transitioned from being for people to being for machines and the delivery thereof, so webassembly's not going to ruin anything. It's already ruined.

[1] Say, no ability to trigger a broad set of events that would ordinarily be user-initiated, no ability to submit forms itself, no XHR or similar. Probably some other stuff I'm not thinking of right now. So, almost but not quite worthless. That would have been necessary if we'd wanted to keep the people browsing in control of the Web. Instead we chased "oooh, shiny!" and took that control away, practically speaking.


I don't see how that does anything except force logic to the server and guarantee page-by-page update and view semantics. At least some large percentage of the code for some of these "modern" web apps is on my computer: that means a lot; and anything we can do to discourage code running in the cloud with data stored in the cloud--which absolutely implies giving complex turing complete control over local behavior and expansive state--is a fundamental win for users of the World Wide Web.


> I don't see how that does anything except force logic to the server and guarantee page-by-page update and view semantics.

Which keeps the delivered document a document, and not an application.

> At least some large percentage of the code for some of these "modern" web apps is on my computer: that means a lot; and anything we can do to discourage code running in the cloud with data stored in the cloud--which absolutely implies giving complex turing complete control over local behavior and expansive state--is a fundamental win for users of the World Wide Web.

I consider it much better that only explicitly-requested actions (clicking a "submit" button, or a link, for example[1]) trigger submission of data and execution on a server than to let javascript trigger the same. Control over behavior and data is better retained, especially for lay users, if it's impossible for an ordinary web page to send every mouse movement and keystroke to a server—something that's in fact done, all the time. Retaining that control is well worth pushing web "app" logic to the server.

Programs that need local execution would just have to be delivered as... a local program, not a web page. Losing that distinction cost the average user control and safety when they're in their browser, and fundamentally changed what the web is. Some of us much preferred what the web used to be, not because we hate progress, but because progress that destroys something that was itself valuable kinda sucks. "Web apps" should have had some other delivery channel, if they had to exist—and clearly there's a demand for better cross-platform GUI application delivery. Making browsers serve that purpose has made the web fat, opaque, and untrustworthy.

[1 EDIT] and in fact I'd rather restrict submission of new data, that the server didn't already have, to form elements the appearance of which cannot be modified at all, like a "submit" button. No JS submit-form-or-add-querystring-data-to-URL-on-link-click shenanigans. On my ideal web links would be safe to click, always, with only form submit buttons carrying any risk at all (and even that far less than now, if, say, JS isn't permitted to add to or modify form data in a POSTable form, which is another restriction JS would have on my ideal Web)


> that conflicts with the underlying tenets?

The underlying tenets are a bunch of technologies designed for rendering documents + one pretty horrible programming language. Which we're now building full-fledged apps and games on. I think that's way more of an abuse of the underlying tenets than webassembly is. I truly believe webassembly will set us free in many ways.


In which way is it terrible? It is a PS and PDF rendering engine. I don't know if you are already aware of this, but PostScript is almost a complete programming language, so writing a PS renderer is like writing an interpreter (or compiler...)

Not trivial stuff at all.

Also, why do you call this "legacy C++ code"?


As I noted, my comment is not about their code. Nor the functionality for that matter. And yes, I'm aware of the issues in PS and PDF rendering, having worked on them as part of a not-so-common and very-common operating system that deeply used them.

Wasm is designed around bringing non-web-native languages to the web. A good project would have been to enable new web native languages to complement JavaScript. Instead, there's an emphasis particularly on bringing C++ to the web. I think this is counter to the philosophy of the platform and harmful in the long run.


It's really about bringing any language that can target LLVM IR to the web, and that's more than just C++.

https://en.wikipedia.org/wiki/LLVM#Front_ends

C++ just happens to be a popular choice because there are many useful C++ libraries, and the compiler already produces very optimized code, so the performance benefit is most noticeable. In particular, I'm excited to get the Lua interpreter, written in C, on the web in a performant way.


There would be no issue with that if you were forced to deliver source code, with comments and without obfuscation, instead of a highly obfuscated binary format.

WebAssembly makes it even more common to run untrusted, random code on your system. The next RowHammer.js will be a lot more powerful.


> In particular, I'm excited to get the Lua interpreter, written in C, on the web in a performant way.

Can't wait for this!


Why should they complement Javascript? The browser should be language-agnostic, just as your bare metal (computer) is.

Otherwise, we should tell people they "must" use C to write applications and that other languages should be there only to "complement C."


Understand the hesitancy. But we develop a very high quality engine for as many platforms as we can.

I.e. we can maintain less code, keep it fast, maintain high quality and identical behaviour across platforms.


I agree, it's 100% rational from your point of view, just as compiling it for the Flash VM or running as an ActiveX component would have made perfect sense for you to bring your high-quality code to more platforms.

My hesitation isn't about your code. It's about what we as a community are doing to the web platform.


Yeah, it will be interesting. Having used this now, I feel we'd be poorer without it. But you are right, there are ethical challenges ahead.


Except WebAsm is language-independent and reuses the DOM (indirectly, through JavaScript).


WebAssembly might be even bigger than the web. People already use the V8 JS engine server side, and make client side apps with Electron. WebAssembly could become the universal cross platform low level VM.


It's not possible. As soon as WebAssembly gets as big as the web, someone will implement a browser-embedded browser in WebAssembly that runs in the browser and we will be back to square 1 in this kafkaesque nightmare we call Web.


Like running the Windows version of Gimp in Wine in X Window in Chrome inside Firefox on a Mac? http://imgur.com/a/wRals

From: https://www.destroyallsoftware.com/talks/the-birth-and-death...


Lol. Can't wait for this piece of art to arrive. Plus, JavaScript web apps could compile to binaries.


Downvotes are unfair, this was quite funny.


Why would you think that it is a joke? It is true, and it is how things have always gone!

When I got into computers, it would have been ridiculous to imagine carrying around a computer running a different OS in a virtual machine, within which we run a virtual network of small virtual machines that exist to run programs written in a scripting language that itself runs in a VM!

Sounds crazy, right? And yet it isn't rare to find a developer on a Mac laptop running Linux in VirtualBox, inside of which they run Docker containers to run applications built on node.js. This is not done as a joke, it is done because it is a useful development environment for an application that is supposed to eventually be deployed to a cloud.

Portable software has been a motivating dream of our industry since forever. It was a selling point of COBOL, Pascal, HTML, Java, Flash, jQuery, and so on and so forth. The idea of deploying a known client target to get rid of browser differences is inevitable. It is just a question of who will be the first to do one that is good and fast enough to get mindshare.


Honestly, it's going to be huge, imo. And like all good tech, can be used for good and evil :)

We want the best experience for users and devs that we can deliver.


The question is: Why?

Multi-platform makes sense client-side, where you have to deploy to Windows/Mac/Linux/FreeBSD/Android/iOS on x86 and ARM.

You know your server arch. You know your server OS. As long as you don't use C or C++ (or you're careful to be multiplatform), all switching is a recompile away.


Using the same platform for everything has advantages, e.g. code sharing between client/desktop/mobile/server, mastering 1 standard library as opposed to several, seamless serialisation of data between platforms. From what I've gleaned from the docs so far, WebAssembly is very well designed and more pleasant to target than LLVM. Having a single compiler ensures that the code behaves the same on each platform.

If you are asking why I think it may happen rather than why it would be a good idea: imagine that you are a compiler writer for language X. You can target WebAssembly and run the language not only on the web, but also cross platform on servers and desktops and potentially mobile, and you even get a cross platform standard library including networking, UI (DOM/HTML), OpenGL, and more. This will save a tremendous amount of work compared to targeting LLVM separately and implementing your own cross platform libraries (which requires expertise on each platform that the compiler writer is unlikely to have). It's like targeting the JVM but lower level (compiler writers like this) and runs seamlessly on the web like JS.

Browser WebAssembly implementations will compile really fast, because modules have to be loaded and compiled on page load; that makes it suitable as a target for interactive development and REPLs, while it will still be possible to compile to fast code via LLVM.


> As long as you don't use C or C++ (or you're careful to be multiplatform), all switching is a recompile away.

Why do you say "as long as you don't use C or C++"? These two languages are especially good when it comes to compiling a single code base for multiple platforms. Got a 150kloc C++ app that "just compiles" on Windows, Mac, Linux, Android, iOS, under x86, x86_64, ARM, and it even worked on PPC Macs last time I tried.


Because you have to be careful to be multiplatform. For example, your code can break if you assume a type like long or a pointer is 4 bytes when on your target it's 8 (hard-coded values for pointer arithmetic), or vice versa.
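
A contrived sketch of the failure mode (the struct and offsets are made up; assumes little-endian and the usual ILP32/LLP64 vs. LP64 split):

    #include <cstdio>
    #include <cstddef>

    // The layout of this struct shifts across platforms: long is 4 bytes
    // on 32-bit targets (and 64-bit Windows), 8 bytes on 64-bit Linux/macOS.
    struct record { long id; char name[8]; };

    int main() {
      record r = { 42, "wasm" };
      const char *p = reinterpret_cast<const char *>(&r);
      std::printf("%s\n", p + 4);  // hard-coded offset: reads into id's high bytes on LP64
      std::printf("%s\n", p + offsetof(record, name));  // portable
      return 0;
    }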


Well, I and scores of other programmers must be quite lucky, because I don't ever remember having to assume or leverage the fact that int or whatever data type was 4 or 8 bytes.


But the reality of production systems is much more complex than this. Many medium- to large-production systems are written in multiple languages across large and diverse teams, have legacy components, varied open source components, and so on, all talking to each other. Sometimes this is as simple as sending messages around, but sometimes not. C is usually the lingua franca since it's the common denominator on most systems. Maybe WASM will end up in a place to better serve that role?


You don't want to go bankrupt running a server-side HLS encoder when you can run ffmpeg in the browser and simply punt the playlist chunks up to an S3 bucket directly from the client, say for a live-broadcasting web app.


You don't always know your server arch and OS. If you sell solutions to others, they may choose the server OS, and you might want to support as many as possible.


>WebAssembly could become the universal cross platform low level VM.

You know, people have tried to do the same thing countless times already; there is Java, and it tries to do exactly that. We all know where Java's promise to "run everywhere" went.

There are JVMs for everything from microcontrollers to s390, but find one that can flawlessly run code that was compiled for a VM even one major release old.

Serious software in Java usually comes with something saying "version 2 of this software runs on JVM 'A' 1.4.13 with patchset 'B' 1.2, classpath 'C' 1.5, and exact VM setting from supplied jvm.conf".


The JVM is also massively more complex than WebAssembly. It defines its own object model, its own memory management system, its own enormous standard library.

WebAssembly is how these things should have been done from the start. Leave the object model, memory management, and libraries to the languages built on top, where they can be changed without affecting the backwards compatibility story of the VM itself.


WebAssembly is unlikely to have this issue because it has to run on the web and browsers don't want to break backwards compatibility. Even with these issues the JVM has actually been quite successful as a language platform.


The edge cases they cited seem to run counter to this expectation:

"At this point, we want to issue a special thank you to the WebAssembly teams at Mozilla and Google, especially Alon Zakai, for being so helpful. We did run into a few edge cases but with their help we were able to still make it happen and even improved the Emscripten tool chain a bit along the way."

If some percentage of realistic projects run into edge cases that are incompatible with either the spec or between browsers, it violates the whole "write once run anywhere" goal.


WebAssembly is in its infancy.


> WebAssembly is in its infancy

Exactly.

Expectations that WebAssembly will be more cross-platform than any previous attempt at cross-platform are probably overly optimistic at this point.

Though I'm hopeful, as well.


These failures are why .NET Core, Go, and Rust ship the runtime with the executable code.


"It's important to point out that it is designed to complement JavaScript not replace it and that in a browser context it has no direct access to the DOM, only via JavaScript."

That may be true of current implementations, but the design docs[0] appear to suggest that you will be able to import platform features in WebAssembly.

[0] https://github.com/WebAssembly/design/blob/master/Portabilit...


Agreed, there was active discussion about this at the last WebAssembly CG meeting even: https://github.com/WebAssembly/meetings/blob/master/2017/CG-...


The biggest issue that I see (and mentioned before) is that the APIs are not versioned and instead rely on 'feature detection', which seems to be due to poor support for, and understanding of, modern-style versioned modules in C++.


> understanding of modern style versioned modules in C++.

Could you enlighten us about this? Is this something to do with the eternal C++ modules proposal that never seems to make it into a standard?

The thing that comes to my mind when I hear "versioned modules" and "C++" is the way you can use "versioned" namespace re-exports to avoid ABI breakage, like so:

    // header for version 1 of library
    namespace mylib {
    namespace v1 {
      struct Foo {
        int num_apples;
      };
    }
    using namespace v1;
    }
    
    // header for version 2 of library
    namespace mylib {
    namespace v1 {
      struct Foo {
        int num_apples;
      };
    }
    namespace v2 {
      struct Foo {
        int num_apples;
        int num_bananas;
      };
    }
    using namespace v2;
    }
But I don't think that's what you mean.


Eventually it should replace JavaScript. It ought to, because it defines a standard, versatile platform for executing code in the browser.


Although you may be right, I hope you are not. I like the semi-open-source nature and high-levelness of JavaScript. It's not perfect, but it beats disassembling things. Yes, I know there are obfuscators.


The .NET team is also looking seriously at some C# love for the WebAssembly ecosystem. It's not a real project yet, but we have made some progress; check out this great interview:

.NET Rocks! #1455 - WebAssembly and Blazor with Steve Sanderson https://www.youtube.com/watch?v=cCdF9-q4n5k


I'm sure some here will laugh but I cannot wait for the day I'm able to ditch JavaScript and TypeScript and just write all my in-browser code in C# via WebAsm.


I think this is true of any language that isn't JavaScript or one of its derivatives. When writing web apps we've had this strange setup where the server side can be any language but the browser side had to be JavaScript.

It makes it hard to write your code only once and then use it in both contexts, which is useful for things like input validation, where you want to run it client side to give good and fast feedback in forms but need to do the same validation server side anyway. It also limits programmers to using JavaScript whether they like it or not. This has led to solutions like Google's GWT and, in the reverse direction, to the popularity of JavaScript on the server side.

Hopefully that limitation is finally lifted and the choice of languages for the web becomes much less constrained. Discussions about the flaws of JavaScript are also often very animated, because people who don't like it feel forced to use it to be able to program for the web. Hopefully this fixes that too.
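
A toy illustration of the kind of sharing this enables (a sketch, assuming the same C++ is compiled natively for the server and to wasm via something like Emscripten for the client; the validation rules are made up):

    #include <cstdio>
    #include <string>

    // Shared validation logic: one translation unit, compiled natively
    // for the server and to wasm for instant in-browser form feedback.
    extern "C" bool is_valid_username(const char *name) {
      std::string s(name);
      if (s.size() < 3 || s.size() > 20) return false;
      for (char c : s)
        if (!((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9') || c == '_'))
          return false;
      return true;
    }

    int main() {
      std::printf("%d\n", is_valid_username("steipete"));  // 1
      std::printf("%d\n", is_valid_username("x!"));        // 0
      return 0;
    }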


+1000

Me too! C# or Common Lisp or Python or whatever, choosing the best choice for the specific project and circumstances.


Downloading .NET Framework v4.0 WASM Edition... 1%... 4 GB to go.


The download for the fully functional app in the video is ~300K, of which ~100K is bootstrap.js, so no, we're not talking about a lot of WebAsm to provide .NET support.


From the github (https://github.com/SteveSanderson/Blazor):

> Is this actually .NET in the browser?

> It's not the regular .NET Framework or .NET Core runtime. It's a third-party .NET runtime called DotNetAnywhere, which has been updated and extended in various ways to support being compiled to WebAssembly, to load and run .NET Core assemblies, and with some additional functionality such as basic reflection and so on.

> Can I build a real production app with this?

> No. It's incredibly incomplete. For example, many of the .NET APIs you would expect to be able to use are not implemented.


Awesome!


I wonder when (if?) Node.js will support WASM. Assuming it does, this becomes a cross-platform server-side solution for PDF generation, AND THAT is something I could actually use this very day.


There are already node/webkit based "server-side solutions for PDF generation" like phantomjs.

And what they showcase here is a PDF document renderer, not a PDF generator.


Oops -- misinterpreted it while skimming the article. Damn day job getting in the way of keeping up with HN articles! :)


It already does!

Try it out.


I'd be interested to know what (aside from the new wasm support) the PSPDFKit standalone web viewer offers over PDFJS. We're currently using the latter, which is good enough for our needs, stable and actively developed, and open source. It does a lot of the work via Web Workers, and on desktop at least, it's performant enough for viewing even unusually large PDFs full of raster images.


PSPDFKit for Web is a hybrid renderer. It can render on the client or server, depending on device capability and power. This allows you to view a 500MB+ file even on a slow connection via your smartphone, make notes that are instantly synced to our on-premise server via https://pspdfkit.com/instant/ and then later you can continue to markup the file on either your desktop browser or maybe a native iOS app such as https://pdfviewer.io/.

We currently deliver SDKs for all major platforms with Windows coming in Q4 and they all have the same high-fidelity renderer and deliver the same result.

With pdf.js, how the output renders also depends on your browser, and it just doesn't support some of the more complex graphics with knockout groups, patterns and gradients.

See Mozilla's presentation where they say that full support for these is not even planned: https://youtu.be/4yLzRoErOHw?t=20m27s

Our customers are in Enterprise B2B space. Some might still use IE11 (which we fully support). Some care that their PDF files do not leak and only want streamed access. Most care about annotation syncing. Some apply watermarks on-demand. We solve a different category of needs than the more consumer-focussed pdf.js.


If you were creating a new general-purpose language today, would you target WASM? And what's the latest on garbage collection for WASM?


Garbage collection is a "Post-MVP" task[0], which is presumably why the official .NET projects haven't formally started targeting WebAsm yet.

[0] https://github.com/WebAssembly/design/issues/1079


Great to know it's on the roadmap. Once a GC is in place, WASM seems like the universal platform of the future for computing: fast startup, efficient, and the same everywhere. We were supposed to get a world of hundreds or thousands of cores for each application, but the reality is that most everything is moving towards small, single-threaded units in the cloud, like serverless.


If most of the more recent languages are anything to go by, you'd target LLVM and in the future get WASM support for free.


As WASM gets higher level constructs like structs and GC, you might be wiser to compile to WASM (assuming the WASM-to-LLVM backend is high quality). WASM is primarily focused on being cross-platform and deterministic. I remember when Rust was being written, the devs complained about platform-specific LLVM IR they had to emit.

The big question is who will make the first WASM stdlib iface so backends can implement it. Otherwise, you might leverage LLVM right now because you have libc.


> It's important to point out that it is designed to complement JavaScript not replace it and that in a browser context it has no direct access to the DOM, only via JavaScript.

Why so? Why can't you avoid using JavaScript?


I've always found PDF.js (i.e. Firefox) to be a nicer experience than Chrome's PDFium, which makes me wonder what this adds.


A big question is, can WebAssembly be used to speed up e.g. Atom and VSC general operation?

Or is the way those are doing things mostly incompatible?


    more powerful features, such as threads, are planned as well

What, no more async hell when coding for the browser?


Weird, most people seem to prefer the async model. It makes things clunky but it's fairly easy to reason about.


We will quickly find out that WebAssembly, while it might sound like the next step, is a bad thing to happen to the web: ads will be binary blobs that consume 100% CPU, and adblockers won't work and cannot be implemented without wasting even more CPU. Besides that, the web will get more and more closed down. It sounds like something Bill Gates originally had in mind in 1994/95 with the pay-per-view, Win32-based The Microsoft Network, which never took off because the free WWW with ads skyrocketed.

Edit: Re "Anything wasm can do, JavaScript can do": asm.js was one thing, and OK, but wasm is the real concern. What about the "binary blobs" do you not understand? The usual downvoters have a vested interest in the success of this questionable tech.


It's not that difficult to use 100% CPU with JS now, you know. And Chrome has already taken steps to limit CPU usage, for power-saving reasons. You can implement adblocking similarly to how it's handled now, on a per-domain basis.

Edit: As browser sandboxes get better, running "binary blobs" gets less and less risky. We're a long way from "curl file | bash".


Anything wasm can do, JavaScript can do, and vice versa. It's just that wasm is faster.


I don't think that's right. For example wasm can't do DOM manipulation, although it might be added in the future. But as the article mentions, wasm will be getting even more different, as things like SIMD get added to wasm while SIMD.js has been abandoned.


I don't understand how WebAssembly would be worse than the way it is now.

Take a real webapp. Say something like Google Docs. Can you read it and understand it? What about gmail? Or Facebook?

Practically, they're a binary blob running on your computer. Not only that, but they could be hiding malware, and even if you viewed source you wouldn't be able to tell it's there.

Now maybe if JS was meant to be slow, and if there was no such thing as AJAX, and there would be no way for JS to send/receive data from the server (say, by modifying a GIF url), then I could see a complaint that a browser shouldn't run arbitrary downloaded code on your computer without opt-in. But that ship sailed over a decade ago.


I've never really thought this argument was a good one. "Hey, we've got a pretty terrible system in place, so instead of fixing it, or replacing it with something better, let's just give up and replace it with something worse!"

I think WASM can and will bring a lot of cool stuff to the web, just like JS did. What worries me is that we learned virtually nothing from the mistakes that we made with JS.


> What worries me is that we learned virtually nothing from the mistakes that we made with JS.

Just out of curiosity, what did you learn from JS's mistakes? What would you like to see happen in place of WASM?


This is quick and dirty, but...

Giving a lot of power to the server is a bad idea. Users should be able to audit, control and modify how the data is processed in a very granular way. Strong artificial limiters on how many resources something can use. Basically, it should be a fully tunable VM for each process. Readable and modifiable by the end-user, without a lot of hoops to jump through.

tl;dr: the more power that the end-user has over the code, the better.


Ah, I see. I even mostly agree with you.

But it seems unlikely things will go that way. People are motivated by what they can make webapps do, and will take the quickest way to get there. Doing it right, where the end user has the most control, is more work to design and implement, so people won't bother.


Agreed, which is why we need to severely limit what the servers can do with the clients. The security and usability fallout from this is going to be terrible, and it pains me to see the internet I love head this direction just to make a couple bucks.


The lesson I learned from JS is that you don't (or at least try not to) hardcode a language as a scripting solution, especially a not-so-fast one like JS. (Yes, it's been worked on like crazy over the past decades, but writing fast scripts requires black magic: intimate knowledge of the runtime, decades of work by the V8 team, and hundreds of engineers.)


> What about the "binary blobs" do you not understand.

I don't understand the assumption that minified and obfuscated javascript is easier to reverse-engineer than compiled wasm is. This is simply untrue. In both cases the user is, in general, running a closed source (in the GNU sense of "the preferred format for code modification") system that doesn't have their best interests in mind.

Adblockers are much the same war as antivirus was: you want to run untrusted code but still have some say over which computation it performs. Unlike with antivirus, sandboxing and isolation don't help. We're just going to be writing heuristics to strip ads forever (or until we standardize on a non-executable document format, e.g. html-without-scripts).


I agree that this will be terrible. We really need to put more tools in the hands of the users, not the developers. Instead of adding more and more closed off technology, we should be pushing to take away the tools that are currently being abused.

Unfortunately, doing the right thing isn't as profitable, so we've collectively chosen to fuck the users instead.


WebASM doesn’t have direct DOM access so I don’t see how ad blockers would work any differently than they do now.


Yes. And why? So some bro could be a "founder" of a project to get a promotion. Fly around on expenses holding meetings and act all important. I've been in the room. The pseudo-PC bro-ness is nauseating.


How did we get here? Please don't post off-topic inflammation.


No bitterness there, nope.



