
"gradual typing" is exactly how I code Julia.

Disclaimer: this is not how the strengths of Julia are normally described by most people; it's just how I think about it.

You might have heard that Julia solves the two-language problem (as easy as Python, as fast as C++). But how exactly does it do that? In Python you don't have to care about types, but even if you were willing to care about types, you wouldn't get any performance benefit. In C++ you have to care about types; you don't have the option not to, even when you don't care about performance.

In Julia you don't have to care about types. You can code exactly as you would in Python. But if you are willing to care about types, you get a huge performance benefit. So, the way I code Julia is: on a first pass I code it like Python. I don't care about types, I just care that it's correct. Then, once it's correct, and if performance is critical, I start thinking about types. Julia very much lends itself to "make it work, make it right, make it fast", by letting you gradually introduce types into the parts of your code that matter.
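(A minimal sketch of that two-pass workflow; the function names and example are mine, not from any particular codebase. Note that in Julia the speedup comes mostly from type-stable code rather than from the annotations themselves; the annotations document and enforce the intent.)

```julia
# Pass 1: Python-style, no types. Correct, but the accumulator starts
# as an Int and may switch to Float64 mid-loop (not type-stable).
function total(xs)
    s = 0
    for x in xs
        s += x
    end
    return s
end

# Pass 2: same logic, but the accumulator's type is tied to the element
# type, so the compiler can emit tight, specialized machine code.
function total_fast(xs::AbstractVector{T}) where {T<:Number}
    s = zero(T)
    for x in xs
        s += x
    end
    return s
end

total([1.0, 2.0, 3.0])       # 6.0
total_fast([1.0, 2.0, 3.0])  # 6.0
```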




The way you use the word "correct" is interesting: in PL theory circles, it usually means "bug free", but it appears that you use it to mean "produces the results I'm looking for".

Indeed, one may write a program which is bug free, yet does not implement the algorithm that produces the expected result (for instance, a program sorting data in ascending order when a descending order is needed). In strongly typed languages, type systems are used to ensure that programs are bug free, following the adage: "if it compiles, it works". The issue of having a proper implementation is not a concern of researchers in their papers; it's purely an engineering problem, so there is no interest for them in using the word "correct" in that sense.

Also, with type inference, one often does not have to think too much about types, and can instead gradually introduce type annotations to resolve disagreements between the compiler and its user.

Gradual typing is yet another tool to achieve a similar result.


> In strongly typed languages, type systems are used to ensure that programs are bug free, following the adage: "if it compiles, it works".

(chokes on his latte) Could you elaborate on this? Because, much as I love strong typing, it surely doesn't mean "bug free" in the end, not in any useful sense.


You make a valid point: I used the word "bug" here without trying first to define it, and that led to confusion in my mind, and thus in my comment.

In that comment, I used the example of programs performing an ascending sort or a descending one. While both programs would be valid, one of them, at least, would not correspond to the intent of the programmer. From an engineering perspective, that would be considered a bug.

I guess an informal yet hopefully apt definition of the word bug could be "an error in the source code of a program leading to an incorrect behaviour of that program at runtime". That incorrect behaviour can take many forms: the most obvious one is the program breaking in the middle of a computation, without providing any result (a semantic bug). A second one is when the compiler will not accept the program as valid (a syntactic bug). A third one is the program producing the wrong result, as in my example (also a semantic bug).

In my mind, I only considered the first two kinds of errors as bugs, and qualified the last as something else, perhaps because only the programmer may really know the intent behind a program's implementation.

One of the goals when designing a programming language is to help reduce the occurrence of bugs. The main strategy is to turn bugs of the first kind (semantic) into bugs of the second kind (syntactic), or at least that is my understanding.

Concretely, with a powerful enough type system, one may express expected properties of a program using the provided syntax, and let the compiler validate these claims using only formal rules.


This is slightly above my pay grade, so take it with a bit of salt.

Type systems in general reject some programs as invalid, but this claim is usually made with powerful type systems in mind; think Haskell, Idris, Rust. I've only played with these casually, but it is true that you can write some code and then let yourself be guided by the compiler in fixing all the errors. It's often the case that the resulting program just works.

Rust's borrow checker is a part of its type system, based on some flavor of linear types if I remember correctly. Think about it: the type system can protect you from double frees. This is powerful.

In Haskell, for starters, you can't perform any side effects outside of the IO monad. You can express precisely what a given function is allowed to do.

In dependently-typed languages you can express that, for example, a function concatenating lists of length n and m returns a list of length n + m.
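(As a concrete sketch of that list-concatenation example, here is the property stated and proved in Lean 4; `simp` discharges it via the standard library's length lemma. This illustration is mine, not from the comment.)

```lean
-- The statement encodes the property in the type itself:
-- appending lists adds their lengths.
theorem append_length (xs ys : List α) :
    (xs ++ ys).length = xs.length + ys.length := by
  simp
```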

And aside from those above, there are a lot of other, less magical things: no nulls, exhaustiveness checks for ADTs, a focus on immutability, lack of exceptions, and so on.

Now, I doubt we are all going to be writing Idris in five years, and even there bugs obviously do happen. But the idea that compiles == works is not entirely out of this world.


Thank you, that's exactly what I had in mind when speaking about strongly typed languages. I don't think I could have presented it in a better way.


Julia is slower than Python for most applications if you include the compile time (which is every time you run your program, because it's "just in time").

edit: I want to be clear - I like Julia, and I have long wanted a scripting language that was grammatically simple like Python but had support for strong typing, etc. But the TTFX (time to first X) problem, for me, muddies the waters on the question of "which is faster, Julia or Python?"


Maybe if your program is just println("hello world!")

Julia has also done a lot lately[1] to cache the results of JIT compilation from packages and re-use them later. So if you're doing a lot of println("hello world!") or whatever, you can make that faster by bundling it in a package and adding a precompile workload. This should also improve further in upcoming releases.

[1] https://julialang.org/blog/2023/04/julia-1.9-highlights/#cac...
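(A hedged sketch of what such a precompile workload can look like, using the PrecompileTools.jl package that the 1.9 blog post describes; the module name is made up, and the exact API may differ between versions.)

```julia
module HelloCached  # hypothetical package name

using PrecompileTools  # assumed dependency providing the macros below

@setup_workload begin
    # Code here runs only during precompilation, to set up inputs.
    msg = "hello world!"
    @compile_workload begin
        # Calls made here are compiled once, at package precompile time,
        # and the native code is cached for later sessions (Julia >= 1.9).
        println(devnull, msg)
    end
end

end # module
```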


This is not true: https://benchmarksgame-team.pages.debian.net/benchmarksgame/... shows middle-of-the-pack Julia implementations being over 10x faster than the fastest Python implementations, including startup and compile time.


On contrived benchmarks, sure. What if I want to, say, parse a json file and print something from it?

I understand that Julia is scientific computing oriented, and is probably faster than Python for those applications, but the fact is that Python is no slouch when it comes to scientific computing. And it can do a lot more, including simple but powerful scripts, which is what I mean when I say "most applications."


> contrived benchmarks

Why is doing actual science more of a contrived benchmark than parsing and printing a json? I think this says more about what you personally do than anything else.


Most code is that. N-body problems and computing the Julia set are cool and beautiful and important. But most code is plumbing scripts that get run for 0.5 s, 10,000,000 times a day.


Sure, if you're solving easy problems python will be fine.


And if you're solving hard problems, it's fine too. That's my only point.

You'll see on that same benchmark page that C is 2-3x faster than Julia. If you want performance, use C. Julia is this weird middle ground where it has the simplified syntax of Python, is a little faster than Python, but still slower than C. Anything that needs to be done in real time will be optimized into a "real" language like C, C++ or Rust.


It's worth noting that the benchmarks game includes startup time. If you look at the execution time alone (which is what matters once you start doing more work), the speeds are equal. For example, https://arxiv.org/pdf/2207.12762.pdf shows Julia beating hand-coded BLAS kernels on the 2nd fastest supercomputer in the world.
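(A small illustrative session of my own, showing the split within one process: the first call to a fresh function pays the JIT cost, later calls run the cached native code.)

```julia
# A function this session has never compiled before.
poly_sum(n) = sum(i^2 + 3i for i in 1:n)

@time poly_sum(10^7)  # first call: time includes JIT compilation
@time poly_sum(10^7)  # second call: compiled code only, far less overhead
```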


I agree that if you keep increasing n on any of these benchmarks, Julia and C should start to approach each other, but the JIT overhead is not meaningless. I think there’s a reason benchmarkgame includes it.

It sounds, though, like they’ve started to seriously address this in versions more recent than what I’ve played with. I suppose I’ll check it out again.


I agree JIT overhead is not meaningless, but it's pretty odd that only some programming languages in the benchmark measure compilation time while others do not. If we really think it's not meaningless, then other languages (C, Fortran, etc.) should include that in the timing as well. Even better would be to have timings which include compilation and which do not. Then we would have a nice way of making a multi-dimensional comparison about the latency and runtime.

Currently, Julia's benchmarks include its compilation time while the building of the C binaries is not measured in theirs, so it's not a direct 1-to-1 comparison. And we don't have the numbers there to really know how much of an effect it has either. More clarity would just be better for everyone.


That's because, until recently, you compiled every time you ran with Julia. It's not the case with C.


It has NEVER been the case that you have to compile a function every time you run it.


Leave him alone. He already made up his mind, you're just confusing him with facts.


LOL no, it's not a little faster than python :D

I just showed you that it's possible to generate native code ahead of time but you ignored that. Now you've moved on to the "next" objection. Anyway, good luck with your life.


With the first example on https://www.json.org/example.html and the following Julia code:

    using JSON3
    data = JSON3.read("test.json")
    println(data[:glossary][:title])

running `time julia +release --project=. --startup-file=no test.jl` gives a total time spent of 0.39 seconds (running on a dev version of Julia brings it down to 0.30). The translation of this into Python is faster (0.02 seconds), but this means that as long as your script has at least a second or so of work to do, Julia will be faster.

Specifically, the timing breakdown is 0.07 seconds to launch julia, 0.07 seconds to load JSON3, 0.0001 seconds to parse the file, 0.07 seconds of compilation for the indexing (I'm pretty sure this is fixable on the package side, see https://github.com/quinnj/JSON3.jl/pull/271), and 0.0001 seconds to do the indexing.


What does "most applications" mean?


Most code that is written is code that is run frequently and with a short runtime. Julia is not good at this.


You compile your code every time you run it? Yikes. Sorry to hear that.


Is there a way to get around that in Julia? I tried to find a way to compile programs directly, and was disappointed with what was on offer.




