
I'm super excited about this! For me the most exciting part is these four sentences:

> The model is portable and free from low-level details. You don’t need to explicitly start and stop threads, and you don’t even need to know how many processors or threads there are (though you can find out if you want).

> The model is nestable and composable: you can start parallel tasks that call library functions that themselves start parallel tasks, and everything works. Your CPUs will not be over-subscribed with threads.

It means that you can put threaded parallelism inside libraries without worrying. It means that Julia itself will start making its builtins threaded. It means that Julia itself is much more threadsafe by default. It means that packages can implement threaded algorithms — and you can use multiple threaded packages together at the same time without worrying. And it means that you can put threaded parallelism in your application code and all these things will work together beautifully.
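As a minimal sketch of that nestability (my own illustration, not from the announcement; the helper name `psum` is made up), a recursive parallel sum can `@spawn` inside calls that were themselves spawned, and the runtime schedules everything on one shared thread pool instead of over-subscribing OS threads:

```julia
using Base.Threads: @spawn

# Recursive parallel sum of f over lo:hi.
# Each level spawns a task for the left half and recurses on the right;
# nested spawns all share Julia's depth-first task scheduler.
function psum(f, lo, hi)
    hi - lo < 1000 && return sum(f, lo:hi)   # small ranges: just compute
    mid = (lo + hi) ÷ 2
    t = @spawn psum(f, lo, mid)              # left half on the thread pool
    right = psum(f, mid + 1, hi)             # right half in the current task
    return fetch(t) + right
end

psum(identity, 1, 10^6)  # → 500000500000
```

A library function could call `psum` from inside its own spawned tasks and nothing special is needed to make the pieces cooperate.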




You still have to be careful about data races and thread synchronization, though:

    julia> using Base.Threads: @spawn

    julia> xs = [1]
    1-element Array{Int64,1}:
     1
    
    julia> for j in 1:1024
               @spawn xs[1] += 1
           end
    
    julia> xs
    1-element Array{Int64,1}:
     686

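For comparison, here is one way to make that counter correct (my own sketch, not from the comment above): use an `Atomic` so the 1024 increments cannot be lost, and `wait` on the spawned tasks before reading the result.

```julia
using Base.Threads: @spawn, Atomic, atomic_add!

x = Atomic{Int}(1)                              # atomic counter, starts at 1
tasks = [@spawn atomic_add!(x, 1) for _ in 1:1024]
foreach(wait, tasks)                            # ensure every increment ran
x[]  # → 1025, unlike the racy version above
```

A `ReentrantLock` around the update would work too; the atomic is just the lightest-weight fix for a single counter.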

Well, only if your code and your libraries do not share mutable state.



