Indeed, and that comes with its own advantages and disadvantages. As someone who always has to care about performance (Machine Learning), I am very happy not to have to worry about dependencies not being compiled to take advantage of the latest and greatest CPU instructions on a specific box in a very heterogeneous cluster. Now, you could just compile it all on each deployment, but something as large as, say, TensorFlow takes ages to build, and not everyone gets to enjoy having great devops at their beck and call.
In theory that can be solved with the best of both worlds: ship the runtime and JIT inside the final binary. That way you still get easy deployment with one big file, plus high performance once the JIT warms up, which should be fast.
I think it's just a problem of manpower on the Julia developer side.
I wholeheartedly agree; one could also note how Java-esque this is. My hope is that someone far better versed than me in this area will find the time and push for it. On that note, one wonderful thing about Julia is how easy it is to contribute to the core language, since it is largely written in Julia itself. It really is amazing to see people from so many disciplines come together to push a "language for science as a whole".
However, go back to my original comment and replace "third party" with "it's not done yet and only experimental right now", and the meaning stays the same... :)
Indeed, but it shows that things have moved up a notch and people are taking it more seriously now. The biggest issue is that all of the compiler folks have thus far been busy making breaking changes to the language. Once v1.0 is out (the alpha is already tagged, and features are frozen!), they will have some free hands :).
https://medium.com/@sdanisch/compiling-julia-binaries-ddd6d4... -> note that this relies on a third-party package.