On contrived benchmarks, sure. What if I want to, say, parse a JSON file and print something from it?
I understand that Julia is oriented toward scientific computing, and is probably faster than Python for those applications, but the fact is that Python is no slouch when it comes to scientific computing. And it can do a lot more, including simple but powerful scripts, which is what I mean when I say "most applications."
Why is doing actual science more of a contrived benchmark than parsing and printing a JSON file? I think this says more about what you personally do than anything else.
Most code is exactly that. N-body problems and computing the Julia set are cool and beautiful and important. But most code is plumbing scripts that get run for 0.5s, 10,000,000 times a day.
And if you're solving hard problems, it's fine too. That's my only point.
You'll see on that same benchmark page that C is 2-3x faster than Julia. If you want performance, use C. Julia is this weird middle ground where it has the simplified syntax of Python, is a little faster than Python, but still slower than C. Anything that needs to be done in real time will be optimized into a "real" language like C, C++ or Rust.
It's worth noting that the benchmarkgame includes startup time. If you look at the execution time (which is what matters once you start doing more work), the speeds are equal. For example, https://arxiv.org/pdf/2207.12762.pdf shows Julia beating hand-coded BLAS kernels on the 2nd fastest supercomputer in the world.
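To make the distinction concrete, here's a minimal sketch of measuring steady-state execution time from inside an already-running session, which is what excludes startup (this assumes the BenchmarkTools package is installed; it isn't part of the benchmark setup above):

```julia
using BenchmarkTools  # assumed installed

xs = rand(10^6)
@btime sum(abs2, $xs)  # reports the steady-state runtime of the kernel, post-JIT, with startup excluded
```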
I agree that if you keep increasing n on any of these benchmarks, Julia and C should start to approach each other, but the JIT overhead is not meaningless. I think there’s a reason benchmarkgame includes it.
It sounds, though, like they’ve started to seriously address this in versions more recent than what I’ve played with. I suppose I’ll check it out again.
I agree JIT overhead is not meaningless, but it's pretty odd that only some programming languages in the benchmark have compilation time measured while others do not. If we really think it's not meaningless, then other languages (C, Fortran, etc.) should include it in the timing as well. Even better would be to have timings both with and without compilation. Then we would have a nice way of making a multi-dimensional comparison of latency and runtime.
Currently, Julia's benchmarks include its compilation time while the time to build the C binaries is not measured at all, so it's not a direct 1:1 comparison. And we don't have the numbers in there to really know how much of an effect it has either. More clarity would just be better for everyone.
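In Julia you can actually see both numbers for yourself, since the first call to a function pays the JIT cost and subsequent calls don't. A quick sketch:

```julia
f(xs) = sum(abs2, xs)
xs = rand(10^6)

@time f(xs)  # first call: includes JIT compilation of f for Vector{Float64}
@time f(xs)  # second call: runs the already-compiled native code
```

On recent Julia versions, `@time` even reports what fraction of the first call was compilation.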
I just showed you that it's possible to generate native code ahead of time, but you ignored that. Now you've moved on to the "next" objection.
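For what it's worth, one way to do this in Julia is PackageCompiler.jl; a sketch, assuming that's the mechanism being referred to (the output image and warm-up script file names here are made up for illustration):

```julia
# Bake JSON3 (plus whatever precompile.jl exercises) into a custom
# system image, so later runs skip that JIT work at startup.
using PackageCompiler

create_sysimage(["JSON3"];
    sysimage_path = "json_sysimage.so",           # output image (hypothetical name)
    precompile_execution_file = "precompile.jl")  # warm-up script (hypothetical name)
```

You'd then run the script with `julia --sysimage json_sysimage.so test.jl`. Anyway, good luck with your life.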
Running `time julia +release --project=. --startup-file=no test.jl` gives a total time spent of 0.39 seconds (running on a dev version of Julia brings it down to 0.30). The translation of this into Python is faster (0.02 seconds), but this means that as long as your script has at least a second or so of work to do, Julia will be faster.
Specifically, the timing breakdown is 0.07 seconds to launch Julia, 0.07 seconds to load JSON3, 0.0001 seconds to parse the file, 0.07 seconds of compilation for the indexing (I'm pretty sure this is fixable on the package side, see https://github.com/quinnj/JSON3.jl/pull/271), and 0.0001 seconds to do the indexing.
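For context, the script being timed is roughly of this shape (a sketch; the actual test.jl isn't quoted above, and the file name and key here are invented):

```julia
using JSON3

obj = JSON3.read(read("data.json", String))  # read and parse the JSON file
println(obj.name)                            # index into it and print a field
```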