That is absolutely false. Look closely at the 'benchmarks' shown in the above article and tell me if they are at all compelling. They measure Julia's startup time and compilation time. If you wanted to compare Julia to C there, you'd want to include the time it takes to compile your hello_world script as well.
Julia's performance optimizations have so far been mostly focused on intensive numerical computations, where startup time and compilation time are merely a constant overhead that is irrelevant to high-performance numerics.
I don't have to recompile a Julia function every time I run it, so long as I don't close my Julia session. If I did need to close and reopen my Julia session a lot for some reason, I'd just statically compile the function. In practice, one rarely needs to close a Julia session and recompile functions, and if one does do it occasionally, the compile time, while a little annoying, is not too bad.
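To make the warm-up cost concrete, here's a minimal sketch (a toy function of my own, nothing from the article) showing that only the first call in a session pays the compile cost:

    # Toy example: the first call JIT-compiles sum_sq for Vector{Float64};
    # later calls in the same session reuse the compiled code.
    sum_sq(xs) = sum(x -> x^2, xs)

    xs = rand(10^6)

    @time sum_sq(xs)   # first call: timing includes compilation
    @time sum_sq(xs)   # second call: essentially just the run time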
Compile times will also be actively worked on post-1.0, once all the breaking changes are done and pressing bugs are squashed.
No, it's not. My comment is not about startup or the issue in this article; it's more generally about Julia as a language of failed promises.
The biggest issue with the standard Julia benchmarks is that they're not using what would be optimal C code to compare against.
In addition, if you look at the Julia issues on GitHub, you'll find hundreds of performance regressions where code runs more than 10 times slower than what was expected/claimed at one point. It's not reliably fast, even when written by Julia experts/developers.
> The biggest issue with the standard Julia benchmarks is that they're not using what would be optimal C code to compare against.
Yes, the C code in the benchmarks is not optimal. Neither is the Julia code, or that of any of the other languages for that matter. The Julia devs made a hard decision with those benchmarks: if they allowed arbitrary optimization, the benchmarks would become more a measure of who spent the most time and know-how writing the benchmarks for ______ language. Instead, they tried to keep the code for all the languages at a comparable level and avoided super-specialized magic. That may make some uncomfortable, but keep in mind that the Julia code used in the benchmarks also has a lot of room for improvement. Some Julia devs are absolute wizards at getting performance out of Julia code if you let them go crazy.
> In addition, if you look at the Julia issues on GitHub, you'll find hundreds of performance regressions where code runs more than 10 times slower than what was expected/claimed at one point. It's not reliably fast, even when written by Julia experts/developers.
Julia 0.7 (which is still an alpha build, by the way) included a ground-up replacement of Julia's iteration protocol and a reworking of a ton of code-optimization routines. If you think you, or anyone else in the world, could take a codebase as large as Julia's and replace fundamental parts of it without seeing performance regressions anywhere, you're delusional.
There are a number of performance regressions, some mysterious and some not, and they will all be worked on because the Julia devs take regressions very seriously. It won't be instantaneous, but I do not doubt that the bulk of them will be eliminated promptly.
There are also orders of magnitude more performance improvements in 0.7-alpha than there are regressions; they just aren't filed as issues. Some of these improvements, especially around broadcasting, are state of the art and not seen in other languages.
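To illustrate what the broadcasting work buys you (a generic example of my own, not one of the filed improvements): dotted operations fuse syntactically into a single loop, so the intermediate results never materialize as temporary arrays.

    a, b = 2.0, 1.0
    x = rand(10^6)
    y = similar(x)

    # One fused loop over x, written in place into y; neither a .* x
    # nor a .* x .+ b is allocated as a temporary array.
    y .= a .* x .+ b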
Calling Julia a "language of failed promises" because the alpha build of a pre-1.0 version has some performance regressions is sensationalist and disingenuous.
This is ridiculous. On top of eigenspaces' comment, the promise was never that any odd Julia code would be fast. You can write untyped/dynamic code, which will be faster than Python but nowhere near C, or you can put a little work into ensuring type stability (from experience this is easier than writing in C from the start, especially as you already have a working prototype that you are iterating on in the same language) and get excellent results.
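For anyone unfamiliar with what "ensuring type stability" means in practice, here's a minimal sketch of my own (hypothetical functions, not from this thread):

    # Type-unstable: acc starts as an Int and may widen to Float64
    # mid-loop, so the compiler can't emit tight specialized code.
    function mean_unstable(xs)
        acc = 0
        for x in xs
            acc += x
        end
        return acc / length(xs)
    end

    # Type-stable: seed the accumulator with the element type.
    # @code_warntype mean_stable(rand(10)) comes back clean.
    function mean_stable(xs)
        acc = zero(eltype(xs))
        for x in xs
            acc += x
        end
        return acc / length(xs)
    end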
I was very careful to state that the people writing the code I was talking about were Julia experts/developers, not just anyone writing arbitrary Julia code.
When you're doing that much typing, you might as well write C or C++.
Okay, with this comment and your comment on preferring C++(11?) and pybind11 [1], I think I am finally getting your angle. Let’s see if I can bridge the perspectives.
If I am reading you correctly, you are a library builder, an infrastructure creator, comfortable caring about the bottom line when it comes to performance. You most likely prototype an algorithm in some high-level language, ensure that it works, then push it down into C++ in order to make it scale to cool problems, and lastly you may create bindings to a higher-level language so that others without your C++ acumen can benefit from your labour. A lot of great code is written this way; some that I rely on in my day-to-day work would be OpenBLAS, TensorFlow, and PyTorch.
I can only really speak for myself, but I know that there are many like me: we are researchers, and to us code is only incidental. We are judged on our ability to churn out as many papers and results as possible, in as little time as is humanly possible (ever wondered why academic code can be absolutely awful?). We rarely know the structure of the solution a priori; rather, we start throwing techniques at things and try to make the experiments run. At some point, a (wild?) performance bottleneck appears and we just want to get around it as soon as possible.

Now, some like myself have been in the Python/Cython world for years, and you can get around a lot of bottlenecks this way. However, it comes at the expense of additional boilerplate and of mastering which parts of the Python programming model you must throw overboard, not to mention how you make your Cython code interact with pure Python code from libraries that others have written.

This is where Julia shines: it lets you move much more easily between “productive” mode and “performance” mode, and to me, that is worth its weight in gold. It is not for everyone, but if I ever have a law named after me I would be happy if it was “Nothing is for everyone”.
You’re exactly right. Thank you for bridging the gap in perspectives. I used to do a lot in Cython, but found that the glue code was taking more effort than writing the whole application in C++.
Glad that someone managed the translation. Another note: I'm a trained mathematician, and I'm not afraid of a type system (nor are the physicists I work with). In fact, I was missing one dearly in Python. If that were all it took to write C++ instead of Python, we all would.
Yet that's how Julia looks to us: Python + a sensible type system.
I can see how one might naïvely think that, but in my experience, and the experience of everyone I’ve talked to who uses Julia, gradually improving the performance of your bottlenecks using only Julia is much nicer than just working in C or dropping down from Python to Cython.
You can improve much more gradually as needed, retain all of the language's features, and carry much less mental overhead, needing only to keep Julia in your head.
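As a sketch of what that gradual tightening can look like (my own toy example, assuming profiling has flagged this function as the bottleneck):

    # First pass: the straightforward prototype you'd write while exploring.
    step(xs) = [2x + 1 for x in xs]

    # Later pass: preallocate the output and skip bounds checks,
    # all without leaving the language or adding glue code.
    function step!(out, xs)
        @inbounds for i in eachindex(xs)
            out[i] = 2 * xs[i] + 1
        end
        return out
    end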
Julia devs introduce performance regressions when replacing critical central code components, just like everyone else does. The advantage is that Julia makes it easier to reason about and fix those regressions.
I take it from your GitHub profile that you're an experienced C++ programmer, in which case you're probably correct.
Personally, I still find C++ to be effectively a black box, meaning that if I want to understand it or make changes, I'm at the mercy of the maintainers or willing colleagues.