
Yes. I'm not coming up with an excuse.

Other JavaScript engines, like the one in WebKit, minimize the amount of analysis they do of JavaScript source in order to avoid extra overhead. Something like an optimizing compilation pass is generally too slow to be done online; it would delay page load time considerably.

But if it could be done offline, operating on cached, frequently used pages, it could improve runtime considerably.

If one were to implement such a system for JS, it would make sense to use file hashes as keys into the precompiled code index, and fall back on slower methods for cache misses until such time as the offline process could compile the code. Small, non-whitespace changes, like the ones in the diffs, would trigger hash changes.
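A minimal sketch of what that lookup could look like. The names here (PrecompiledCache, the hash choice) are purely illustrative, not any real engine's internals:

```python
import hashlib

def source_hash(js_source: str) -> str:
    """Key the cache on a hash of the raw source bytes, so any change
    to the file (including the small diffs mentioned above) yields a
    different key and a cache miss."""
    return hashlib.sha256(js_source.encode("utf-8")).hexdigest()

class PrecompiledCache:
    def __init__(self):
        self._compiled = {}   # hash -> precompiled/optimized code
        self.pending = []     # misses queued for the offline compiler

    def lookup(self, js_source: str):
        key = source_hash(js_source)
        if key in self._compiled:
            return self._compiled[key]   # fast path: reuse optimized code
        self.pending.append(key)         # offline pass fills this in later
        return None                      # caller falls back to the slow path

    def store(self, js_source: str, compiled) -> None:
        self._compiled[source_hash(js_source)] = compiled
```

On a miss the engine would interpret or baseline-compile as usual, while the offline process works through the pending queue.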

Given such a system, precompiling the benchmark is not cheating. My point is that you are confusing necessary with sufficient conditions, and are making damning conclusions without proper evidence.




Ok, so your hypothesis is that this benchmark is fairly frequently executed, so that it's reasonable to think that a precompiled version is stored somewhere?

In that case, to avoid the accusation of cheating, the choice of precompiled code should have an algorithmic basis: for instance, something akin to the Alexa rank of the .js files at various CDNs. That would make sure that jQuery would be precompiled, which could well be rational.

But I seriously doubt that such an objective method would include this benchmark code in the IE precompiled payload...


If they have the ability to precompile JS code, they would, of course, precompile the benchmark. Why would you run a benchmark in "slow" mode if you had a fast mode available? There's nothing wrong with precompiling the benchmark.

I'm not saying that's what they are doing, because I don't know. I'm saying that the conclusion of cheating is unfounded.



