Hacker News

It's worth noting that the benchmarks game includes startup time. If you look at execution time alone (which is what matters once you start doing more work), the speeds are equal. For example, https://arxiv.org/pdf/2207.12762.pdf shows Julia beating hand-coded BLAS kernels on the 2nd-fastest supercomputer in the world.


I agree that if you keep increasing n on any of these benchmarks, Julia and C should start to approach each other, but the JIT overhead is not meaningless. I think there's a reason the benchmarks game includes it.

It sounds, though, like they’ve started to seriously address this in versions more recent than what I’ve played with. I suppose I’ll check it out again.


I agree JIT overhead is not meaningless, but it's pretty odd that only some languages in the benchmark have compilation time included in their measurements while others don't. If we really think it's not meaningless, then other languages (C, Fortran, etc.) should include it in the timing as well. Even better would be to report timings both with and without compilation; then we'd have a nice multi-dimensional comparison of latency and runtime.

Currently, Julia's benchmark timings include its compilation time while the time to build the C binaries is not measured, so it's not a direct 1:1 comparison. And we don't have the numbers to know how much of an effect it has, either. More clarity would be better for everyone.


That's because, until recently, Julia recompiled your code every time you ran it. That's not the case with C.


It has NEVER been the case that you have to compile a function every time you run it.
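Right: within a session, Julia compiles a method once per concrete argument-type signature and caches the compiled code. A minimal REPL sketch (the function name `f` is just an illustration; on Julia 1.6+ `@time` also reports what fraction of the elapsed time was spent compiling):

```julia
# In a fresh session, the first call to a function for a given set of
# argument types triggers JIT compilation; later calls with the same
# types reuse the already-compiled native code.
f(v) = sum(x -> x^2, v)

v = rand(10^6)
@time f(v)   # first call: time includes JIT compilation
@time f(v)   # second call: no compilation, just the run time
```

What has changed across releases is how much of that compiled code can be cached *between* sessions (native-code caching in precompiled packages), not whether a function recompiles on every call within one.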


Leave him alone. He already made up his mind, you're just confusing him with facts.



