
Can you explain how you would optimise an expression like "a + b" at compile-time when the types are ambiguous? Surely you have to fall back to generic code, which is slower than a JIT that can observe that both operands are, say, numbers and emit the ideal instructions for that case.
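Roughly, the difference I have in mind, as a hypothetical sketch in plain JS (the names are invented):

    // Generic fallback: every call re-checks tags and walks JS's coercion rules.
    function genericAdd(a, b) {
      if (typeof a === "number" && typeof b === "number") return a + b;
      if (typeof a === "string" || typeof b === "string") return String(a) + String(b);
      return Number(a) + Number(b); // (real JS "+" has more cases than this)
    }

    // What a JIT can emit after observing only numbers: one cheap guard,
    // then what compiles down to a bare machine add.
    function speculativeAdd(a, b) {
      if (typeof a !== "number" || typeof b !== "number") {
        throw new Error("deopt"); // stands in for bailing out to the interpreter
      }
      return a + b;
    }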



If I were to make a bet, I'd say that they don't implement the whole of JavaScript, and that there is some sort of static type system which either rejects programs whose types can't be inferred or lets performance fall off a cliff where inference fails.

So essentially it's likely a "JavaScript-like compiler" rather than a "JavaScript compiler"... similar to how Crystal is Ruby-like but not really Ruby.


I would definitely bet that eval() is unsupported. And Function(body). Probably with(){}.

However, apart from that, I don't see what can't be done: you can simply look at what functions get called, with what parameters, and generate code for each fundamental type (doubles, strings, …) used. It's like making C++-style templates for all function parameters.
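For instance, a rough sketch of what I mean (with made-up mangled names): if every call site of add() is visible, the compiler can instantiate one body per signature it finds, exactly like a template:

    // Source:
    //   function add(a, b) { return a + b; }
    //   add(1, 2); add("x", "y");

    // Possible compiler output: one instantiation per observed signature,
    // with each call site rewritten to call its variant directly.
    function add$num$num(a, b) { return a + b; } // pure numeric add
    function add$str$str(a, b) { return a + b; } // pure string concat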

The flip side is awful compilation times on large projects, and (I expect) poor results on maths for which JS VMs detect small ints.

I actually wanted to do something like this as a follow-up from my experiments with JS type inference, but I lacked time…


I imagine the big problem is anything depending on user input, which is unpredictable. Then the unpredictability probably propagates a long way through the program.

A simple example would be something like `x = (isUserOnMars() ? "foo" : 1)`. The type of `x` depends on whether the user is on Mars. In practice they never will be, but a compiler can't tell that, so it must consider that case and make the code appropriately generic. A JIT, however, can see that the condition always appears to be false and optimise around `x` tending to be a number, with a bailout if that's ever wrong (which it likely never will be). The use of `x` may then propagate a long way through the program, in all sorts of places, extending the effect.
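For illustration, the speculative code a JIT might produce for a use of `x` could look roughly like this (a hypothetical sketch; bailOut is an invented stand-in for deoptimisation):

    // Fast path compiled on the assumption that x is a number, based on profiling.
    function useX(x) {
      if (typeof x !== "number") bailOut(); // almost never taken in practice
      return x * 2;                         // unboxed arithmetic, no string handling
    }

    // Stands in for deoptimising back to the interpreter.
    function bailOut() { throw new Error("deopt: x was not a number"); }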


Is there any reason the compiler can't just emit optimized code for both the string and int cases?


You're going to need to bias in one direction or another. You can safely inline assumptions about the int into the hot path, with a consciously slower bail-out path for when it's not an integer. Now consider a situation where `x` is probably an integer, but can also be a string or null. We're rapidly getting to a situation where this is going to be puffy code that's hard to constrain to an effective hot path without running the code, yeah?
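A hypothetical sketch of that puffy shape, where every path has to be compiled up front and the hot one sits behind a pile of guards:

    function consume(x) {
      if (typeof x === "number" && Number.isInteger(x)) {
        return x + 1;                // the path we'd like to be hot
      } else if (typeof x === "string") {
        return parseInt(x, 10) + 1;  // compiled whether or not it ever runs
      } else if (x === null) {
        return 1;                    // likewise
      }
      throw new TypeError("unexpected type for x");
    }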

Profile-guided optimization seeks to do this for compiled languages like C++, but you also have a boatload more data to do it with. And, y'know. You're profiling the application. You're running it.


Depending on the ground rules you set, I don't think eval() or Function() would be that bad, considering Function() and indirect eval both evaluate in the global scope and you're working with strings in the first place.

My first guess for the chopping block would be the functionality of franken-objects like Function.arguments (especially properties like arguments.callee).


You can generate code for all type combinations of a and b. This will blow up the size of the compiled code. It's a classic space-time trade-off.
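A sketch of the idea (hypothetical names): for a single binary + over just {number, string} that's already four bodies, and it grows as types^operands:

    // One specialized body per type combination of a and b. In JS source they
    // look similar; in compiled output each would be different machine code.
    const addVariants = {
      "number,number": (a, b) => a + b,          // raw machine add
      "number,string": (a, b) => String(a) + b,  // coerce left, then concat
      "string,number": (a, b) => a + String(b),  // coerce right, then concat
      "string,string": (a, b) => a + b,          // pure concatenation
    };

    // t possible types over n operands means t^n variants per operation site.
    function add(a, b) {
      return addVariants[typeof a + "," + typeof b](a, b);
    }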

Moreover, you can carry out heavy static analysis that can often narrow down the possible types of a and b. The main problem is that static analysis good enough to do this runs very slowly, so it's not an option inside a web browser.


You only need to generate code for the combinations that are actually used, which means you can trade RAM (a larger compilation unit) against the number of (different types of) inputs you have to handle.
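A rough sketch of the idea (hypothetical; runtime memoization standing in for "only emit what's actually reached"):

    // Specializations are created on first use and cached, so space is only
    // spent on the type combinations the program actually exercises.
    const specializations = new Map();

    function addLazy(a, b) {
      const key = typeof a + "," + typeof b;
      let fn = specializations.get(key);
      if (!fn) {
        fn = compileAddFor(key); // invented name: stands in for the code generator
        specializations.set(key, fn);
      }
      return fn(a, b);
    }

    function compileAddFor(key) {
      // A real system would emit specialized machine code for `key` here.
      return (a, b) => a + b;
    }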



