The benchmarks at [1], which compare Julia with other languages, were written to be idiomatic in those languages. That is, they weren't meant to be the fastest possible code in each language (which would often mean calling a C library), but rather something that represents the language well. So the right criticism is not just whether the benchmarks could be made faster, but whether the speedup requires code that is overly tricky or relies on wrapped C libraries.
Actually, Chris, it says on the website that the benchmarks were "written to test the performance of specific algorithms, expressed in a reasonable idiom".
I took issue with how some of the Java was written. For example, they wrote their own quicksort, which was slower than just using Arrays.sort, the much more idiomatic approach in Java. I even submitted a PR that went nowhere: https://github.com/JuliaLang/julia/pull/14229 I then broke the improvements into smaller PRs and am still waiting, after two weeks, for the first one to be merged.
Hi Ryan, the point was not to use Java's built-in sort, but to implement a textbook quicksort in all languages to see how each compiler performs. That is why the original PR was not merged.
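To make the contrast concrete, here is a minimal sketch (not the actual benchmark code; the class name and driver are made up for illustration) of a textbook quicksort next to the Arrays.sort one-liner Ryan proposed:

    import java.util.Arrays;
    import java.util.Random;

    public class QuicksortDemo {
        // Textbook in-place quicksort: the style the benchmark intends to measure.
        static void quicksort(double[] a, int lo, int hi) {
            if (lo >= hi) return;
            double pivot = a[(lo + hi) >>> 1];
            int i = lo, j = hi;
            while (i <= j) {
                while (a[i] < pivot) i++;
                while (a[j] > pivot) j--;
                if (i <= j) {
                    double tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                    i++; j--;
                }
            }
            quicksort(a, lo, j);
            quicksort(a, i, hi);
        }

        public static void main(String[] args) {
            double[] a = new Random(42).doubles(5000).toArray();
            double[] b = a.clone();
            quicksort(a, 0, a.length - 1);  // hand-written textbook version
            Arrays.sort(b);                 // idiomatic one-liner (dual-pivot quicksort)
            System.out.println(Arrays.equals(a, b));  // prints true: same result
        }
    }

Both produce the same sorted array; the disagreement is only over which one the benchmark should time.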
On the smaller PR, I had requested a fix to the mandel benchmark, whose Java version does less work than the Julia and Lua versions, giving it an unfair advantage. That should be easy enough to fix too, but I didn't get a reply.
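For context, mandel is an escape-time computation over a grid of complex points, so any difference in the iteration cap or grid bounds changes how much work a version does. A rough sketch of the kind of kernel involved (the 80-iteration cap and grid here follow the published Julia microbenchmark, but the Java shown is illustrative, not the benchmark source):

    public class MandelDemo {
        // Escape-time iteration for one point c = (cr, ci): returns the number of
        // iterations until |z| > 2, capped at maxIter.
        static int mandel(double cr, double ci, int maxIter) {
            double zr = cr, zi = ci;
            for (int n = 1; n <= maxIter; n++) {
                if (zr * zr + zi * zi > 4.0) return n - 1;  // |z|^2 > 4 means escaped
                double tmp = zr * zr - zi * zi + cr;        // z = z^2 + c
                zi = 2.0 * zr * zi + ci;
                zr = tmp;
            }
            return maxIter;
        }

        public static void main(String[] args) {
            long total = 0;
            // Grid and cap assumed from the Julia microbenchmark: real axis
            // -2.0..0.5, imaginary axis -1.0..1.0, step 0.1, 80 iterations max.
            for (int i = 0; i <= 25; i++) {
                for (int j = 0; j <= 20; j++) {
                    total += mandel(-2.0 + 0.1 * i, -1.0 + 0.1 * j, 80);
                }
            }
            System.out.println(total);  // total iterations: the "work" being compared
        }
    }

If one language's version uses a smaller cap or a coarser grid, its total iteration count drops, and its timing looks better than it should.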
Let's get it merged though, and continue the discussion on the PR.
[1] http://julialang.org/benchmarks/