Never mind TraceMonkey -- the JavaScriptCore kids rallied in 2008 with SquirrelFish Extreme, which held its own going into late that year.
The chosen measure of these new VMs was a set of benchmarks: SunSpider from Apple and the V8 Benchmarks from Google. While V8 had the best GC and the most optimizations, on these suites at least it was not that far ahead, for about two weeks in TraceMonkey's case (and longer for SFX).
You can still find the charts via Google.
V8 had the longest lead time, not just working on what was released with Chrome but trying other approaches first, learning from them, and starting over. That's huge and it has paid off well.
But I don't agree that any architectural failing of one VM counts more than public, reproducible benchmark scores. Even V8 had to do Crankshaft.
Architectures evolve and supersede one another, but the developer and user benefit -- the public benefit -- comes from the competition. V8 was not alone in driving competition.