There is no isolation between the operation being tested and the cost of unrelated operations.
Example: the test for the bind operation executes the "ul > li" selector before performing the bind, so the selector's cost is included in the bind result.
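To make the isolation point concrete, here is a hedged sketch (not the actual taskspeed code; `select` and `bind` are stand-ins) contrasting a benchmark that times the selector together with the operation under test against one that resolves the selector outside the timed region:

```javascript
// Conflated: the selector query runs inside the timed region,
// so its cost is charged to the bind operation.
function timeConflated(select, bind) {
  const start = process.hrtime.bigint();
  const items = select("ul > li"); // selector cost is counted
  bind(items);                     // bind cost is counted
  return Number(process.hrtime.bigint() - start);
}

// Isolated: resolve the selector first, time only the bind.
function timeIsolated(select, bind) {
  const items = select("ul > li"); // outside the timer
  const start = process.hrtime.bigint();
  bind(items);
  return Number(process.hrtime.bigint() - start);
}
```

With a slow selector and a cheap bind, the conflated version reports a much larger number for the same bind operation, which is exactly the distortion being described.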
I could be wrong, but it looks like a benchmark tailored specifically to RightJS's strengths, and it gives no indication of the overall performance of applications built on RightJS.
RightJS might be wicked fast for all I know, but this test suite is not going to convince me.
Write your own specific tests if you suspect bias.
I doubt there is any real bias, though; JS libraries have different aims. Some aim for complete browser support at the expense of speed and size, while others assume a certain baseline browser and so come in much faster.
The fastest lib though is always going to be no lib. It's far easier to optimize your own code.
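One reason hand-written code can win, sketched with hypothetical functions (neither is any real library's API): a generic library entry point must handle every input shape on every call, while application code that knows its own data can skip those checks.

```javascript
// Hypothetical selector registry standing in for a real DOM lookup.
const registry = { "ul > li": [1, 2, 3] };
function lookup(sel) { return registry[sel] || []; }

// Generic library-style helper: accepts a selector string,
// a single item, or an array, so it pays for type sniffing
// on every call.
function libraryEach(target, fn) {
  let items;
  if (typeof target === "string") items = lookup(target);
  else if (Array.isArray(target)) items = target;
  else items = [target];
  for (let i = 0; i < items.length; i++) fn(items[i]);
}

// Hand-written equivalent: the caller already knows it has
// an array, so no dispatch is needed.
function appEach(items, fn) {
  for (let i = 0; i < items.length; i++) fn(items[i]);
}
```

The per-call overhead here is tiny, but it is the kind of generality tax that disappears when you write only the code path your own application needs.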
The fastest lib though is always going to be no lib.
This is only true if you already know all of the performance hacks for each browser. By using a library, you outsource that worry to a third party. Besides, only about 5% of your code will ever be performance-critical, so deciding not to use a library means you're wasting time on the other 95%.
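"Outsourcing browser performance hacks" typically looks like feature detection done once at load time, so every caller gets the fastest available path without knowing it exists. A minimal sketch using string trimming as the example (the fallback regex is illustrative, not taken from any particular library):

```javascript
// Detect the fast native path once, at definition time, and
// bind the chosen implementation; callers never re-check.
const trimFn = (typeof "".trim === "function")
  ? (s) => s.trim()                      // native fast path
  : (s) => s.replace(/^\s+|\s+$/g, "");  // fallback for old engines
```

Libraries of that era were full of switches like this (for event binding, class manipulation, and so on), which is precisely the per-browser knowledge the comment says you'd otherwise have to maintain yourself.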
http://github.com/MadRabbit/taskspeed/blob/master/tests/jq-t...