Very cool that they managed to get SpiderMonkey working through Emscripten. But I wonder if they could have just used Narcissus[1] instead and saved themselves some time.
Narcissus only runs on SpiderMonkey because it relies on nonstandard SpiderMonkey extensions. Unless I'm misunderstanding the implications of the first paragraph in the link you provided.
The very awesome Twitter demo on the js.js site is one example of something you could in no way do with Narcissus, except as a proof of concept in Firefox.
PyPy is faster than CPython (the default, official implementation of Python) despite being written in Python. It does not run in a CPython or PyPy environment, though. It can, but it is really slow that way. As I understand it, it is translated to C (using Python again) and then compiled. I might be wrong; it is a very confusing project.
PS: Some of the "Python" in the sentences above might actually be RPython. But since RPython is a subset of Python, it probably doesn't matter.
> PyPy is faster than CPython (the default, official implementation of Python) despite being written in Python
This is far overreaching and, as such, very wrong. Also, "language X is faster than language Y" is a meaningless comparison. Faster at what? In what cases? PyPy does some things faster than CPython in some cases.
> PS: Some of the "Python" in the sentences above might actually be RPython. But since RPython is a subset of Python, it probably doesn't matter.
I don't know for sure that this is wrong, but it sure seems to be.
Well, faster or equal, eventually at everything. Anything that PyPy does slower than CPython is considered a bug, as far as I know. But I was not making a definitive academic statement or anything, just saying that it is, in general, faster. After all, it is faster on the mother of all benchmarks, the fib function :) (edit: on my computer)
As for the PS: RPython is statically typed Python (perhaps some other restrictions apply too). Not type-annotated or anything, just pure Python written in a certain way, so it is a subset. As for which "Python" should have been "RPython": the first one. The PyPy interpreter is written in RPython. But my statement was still correct, because RPython is Python.
What was the point of all of this? Why not just contribute sandboxing functionality to JavaScript itself? With a 200x slowdown, js.js is a total waste of time.
A 200x slowdown is only a problem if you're running a script that requires a lot of computation. js.js seems aimed at things like ads and other widgets, which are often just doing a bunch of easy, boring DOM things and can still function fast enough not to annoy the user, even with a big slowdown factor.
Also, as they say, they just got this working. It seems reasonable to assume that, with enough programming mojo, the slowdown could be brought closer to 10x or even 5x.
The ads that use CPU are generally calling Flash or some HTML5 canvas type of thing. Actually, that's a good question: I wonder how js.js handles that sort of stuff?
Thus replacing vulnerabilities in your HTML sanitizer with vulnerabilities in an Emscripten-compiled SpiderMonkey, which you believe to be safer mostly because you are unable to do a reasonable code review of the resulting code :)
JS is very fast these days in all the modern browsers, including IE9. There is plenty you could do.
However, if you really want speed, I wonder if you could bridge access to certain useful functions by delegating to, for example, sandboxedAnimate = function (sel, attr, duration, easing) {} in a closure (see the sketch below). If you want to allow plugins on your site, for example, 200x for basic stuff and full speed for jQuery animations, etc. would be more than enough 99.9% of the time.
Also, you could use CSS animations, where there would be zero slowdown. So make that plenty fast 99.99% of the time.
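Roughly what I have in mind; sandboxedAnimate, sandboxedAddClass, and the selector whitelist are all hypothetical names, and the plugin would only ever see the returned wrappers, never jQuery or the raw DOM:

    // Hypothetical host-side bridge, built in a closure so the guest
    // can't reach anything except the whitelisted wrappers.
    var makeSandboxApi = function ($) {
      var allowed = { '#plugin-box': true, '.plugin-item': true };

      return {
        // Full-speed jQuery animation, restricted to whitelisted selectors.
        sandboxedAnimate: function (sel, attr, duration, easing) {
          if (allowed[sel]) { $(sel).animate(attr, duration, easing); }
        },
        // Adding a class lets the guest trigger CSS animations, which
        // then run natively with zero slowdown.
        sandboxedAddClass: function (sel, cls) {
          if (allowed[sel] && /^[\w-]+$/.test(cls)) { $(sel).addClass(cls); }
        }
      };
    };

The interpreted plugin would call these wrappers for anything visual, so only its boring glue logic pays the 200x interpretation tax.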
Yeah, providing sandboxed and controlled access to certain areas of the DOM and certain functions is exactly what they appear to be doing in their Twitter example. However, that tiny demo looks like it takes up 800 lines of code! WTF?
Yeah, I'm in no way proud of that beast of code. There are a couple of things at play here:
1. The js.js API is currently very low-level, which makes it verbose and difficult to use. There's a lot of room for improvement.
2. The Twitter script was actually chosen because it's complex. It has a ton of boilerplate code that you'd probably be surprised is in there.
3. A lot of the code could be generalized into a generic virtual DOM interface library that is not specific to this Twitter script. Things like screen.width and screen.height are common properties that would be accessed by many different scripts, and so could be shared.
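To illustrate the idea (this is not the current js.js API, just a hypothetical sketch): such a shared layer could be a whitelist table of host values that guest property reads resolve against, instead of the real globals.

    // Hypothetical shared virtual-DOM layer, not specific to any one script.
    // Guest reads of screen.width etc. would resolve against this table
    // rather than against the host's real globals.
    var makeVirtualEnv = function () {
      return {
        screen: { width: window.screen.width, height: window.screen.height },
        navigator: { userAgent: window.navigator.userAgent },
        // Anything not listed here simply reads back as undefined.
        lookup: function (obj, prop) {
          return this[obj] ? this[obj][prop] : undefined;
        }
      };
    };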
Kudos. This is very impressive. However, I think iframes + postMessage + a cross-frame DOM/BOM abstraction is the cleaner, more reliable, more usable, etc. way to go. Or defining a sandbox API for browsers that doesn't require iframes. But this could even be a nice first step towards that sandbox API.
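For comparison, the iframe route is only a few lines on the host side (the widget origin below is made up):

    // Host page: load the untrusted widget in a sandboxed iframe and
    // talk to it only via postMessage.
    var frame = document.createElement('iframe');
    frame.setAttribute('sandbox', 'allow-scripts'); // no same-origin access
    frame.src = 'https://widgets.example.com/guest.html'; // hypothetical URL
    document.body.appendChild(frame);

    window.addEventListener('message', function (e) {
      if (e.origin !== 'https://widgets.example.com') return; // verify sender
      // e.data carries the guest's request, e.g. { action: 'setText', ... }
    }, false);

    frame.onload = function () {
      frame.contentWindow.postMessage({ action: 'init' },
                                      'https://widgets.example.com');
    };

The DOM/BOM abstraction then lives in that message handler, which is where the real design work is.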
Could a tracing JIT compiler optimize even this case, of a main loop thousands of lines long? I get the impression they've fallen from favour, but optimizing code regardless of how it's arranged into methods seems a better, more general approach.
Even more evil would be to use JRuby and TheRubyRhino: run a JavaScript JavaScript implementation in a Java JavaScript implementation bridged to a Java Ruby implementation! To boot, the Rhino approach is probably a lot slower, since TheRubyRacer uses libv8.
[1] http://en.wikipedia.org/wiki/Narcissus_%28JavaScript_engine%...