
It is going to depend partly on the internals of your JIT. Since JS objects are so mutable, it's common to use their shape as the basic check for fast method dispatch. So if you take a Foo and add a property x, it becomes a Foo(x). If you make lots of calls on objects of this shape, the method lookups will make it into inline caches and be very quick; calls against an object of shape Foo() or Foo(y) or Foo(x,y) will be treated separately. Inline caches for method dispatch are generally very small, so you'll only get really fast dispatch for a few different shapes at any particular call site, and the more consistent the shapes, the more likely you are to keep the JIT happy.
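Here's a rough sketch of how that plays out in code (sumX and the exact cache behaviour are just illustrative; the real heuristics vary by engine and release):

    function Foo(x) {
      this.x = x;                // every Foo built this way has shape Foo(x)
    }
    Foo.prototype.getX = function () { return this.x; };

    function sumX(items) {
      let total = 0;
      for (const item of items) {
        total += item.getX();    // this call site stays monomorphic while
      }                          // every item it sees has the same shape
      return total;
    }

    const uniform = [new Foo(1), new Foo(2), new Foo(3)];
    sumX(uniform);               // one shape seen: fast inline-cached dispatch

    const mixed = [new Foo(4), new Foo(5)];
    mixed[1].y = 9;              // this object transitions to shape Foo(x, y)
    sumX(mixed);                 // two shapes now hit the same call site; a few
                                 // are fine, but with too many the cache gives up
                                 // and falls back to a slower generic lookup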



So your argument is that we should, now and in the future, constrain our usage of language features based on the current state of language runtimes?

How are we going to keep pushing the boundaries then? Had JS started with that sentiment, we would have a V8 runtime running at about a hundredth of the speed it does now for general-purpose code written with the aim of being as clear and reusable as possible.

Personally, I prefer good, clear, concise code over code written to appease some secondary effects inside a black box that isn't guaranteed to stay the same.

This is clearly an optimization, and I'm not saying all optimization is bad per se, but any optimization that affects code clarity should be justified. Are you sure these optimizations are warranted? Have you profiled your code and identified this as a bottleneck in a performance-critical section?

A former Nvidia engineer had a really good rant about where this sort of thinking leads:

http://www.gamedev.net/topic/666419-what-are-your-opinions-o...

Key quote: "So the game is guessing what the driver is doing, the driver is guessing what the game is doing, and the whole mess could be avoided if the drivers just wouldn't work so hard trying to protect us."

I agree it's not a direct analogue, but the same line of thinking still applies.


My argument is that you should be aware, at least to some extent, of what is required to implement a language, and of what may go against implementation assumptions. These things are always a tradeoff. If programs mutate objects enough over their lifespan, then maybe implementations will move towards looking at the methods attached to an object and ignoring the shape of the data except when accessing a property (the methods are already normally stored off to one side, on the assumption that you won't change them). But that would either make objects larger (a ref to the shape and a ref to the methods per object) or require an extra indirection (following the shape to reach the methods), so it may incur a slight overall performance penalty.
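For instance (Counter is just a made-up example, and engines differ in the details), patching a method onto a single instance is exactly the kind of mutation that cuts against that "methods stored off to one side" assumption:

    function Counter() {
      this.n = 0;
    }
    // Shared: lives on the prototype, off to one side of every instance.
    Counter.prototype.inc = function () { this.n += 1; };

    const a = new Counter();
    const b = new Counter();     // a and b start out with the same shape

    // Attaching a replacement directly to one instance makes 'inc' an own
    // property of b, so b's shape diverges from a's, and a call site that
    // sees both can no longer assume a single layout.
    b.inc = function () { this.n += 2; };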

Yes, we are all in the same game as GL driver developers and game devs, and that goes for all language implementations. The fine details of what gets optimised best may change from VM to VM, and from release to release, but it's important to understand the general assumptions that these systems are built around, because they don't change quickly.

In general, dynamic languages that allow properties to be created and deleted on individual objects are a complete PITA to optimise, and mutability of classes makes things harder still. So even though you have these abilities in JS and other languages, your code will often perform better if you limit your use of them, especially in performance-critical areas. This doesn't mean you shouldn't use those mutability features, but you should have some notion of the cost you might be incurring.
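As a concrete sketch of what "limiting your use of them" tends to mean in practice (User is a made-up example; this is the general advice, not a guarantee about any particular VM):

    function User(name) {
      // Initialise every property up front, in the same order, so all
      // User objects share a single shape.
      this.name = name;
      this.email = null;         // placeholder rather than adding it later
    }

    const u = new User("ada");

    // Usually cheap: overwriting an existing property keeps the shape.
    u.email = "ada@example.com";

    // Usually expensive: delete tends to push the object into a slower
    // representation, so prefer resetting to null/undefined instead.
    // delete u.email;
    u.email = null;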


Thanks. That explained it really well. Semantically it makes sense to me, but I didn't take the runtime into account.



