Send + Sync are great. The downsides of concurrency in Rust are:
1) There isn't transparent integration with IO in the runtime as in Go or Haskell. Rust probably won't ever do this because although such a model scales well in general, it does create overhead and a runtime.
2) OS threads are difficult to work with compared to a nice M:N threading abstraction (which, again, is the default in Go and Haskell). OS threads lead to lowest-common-denominator APIs (there is no way to kill a thread in Rust) and some difficulty in reasoning about performance implications. I am attempting to solve this aspect by using the mioco library, although due to point #1, IO is going to be a little awkward.
> 1) There isn't transparent integration with IO in the runtime as in Go or Haskell. Rust probably won't ever do this because although such a model scales well in general, it does create overhead and a runtime.
By "transparent integration with the runtime" you mean M:N threading. M:N threading just delegates to userspace work that the kernel is already doing. There can be valid reasons for doing it, but M:N threading isn't work we neglected to do. In fact, we had M:N threading for a long time and went to great pains to remove it.
In addition to the downsides you mentioned, M:N threading interacts poorly with C libraries, and stack allocation becomes a major problem without a precise GC that can relocate stacks.
M:N will never be as fast as an optimized async/await implementation can be, anyway. There is no way to reach nginx levels of performance with stackful coroutines.
> OS threads lead to lowest common denominator APIs (there is no way to kill a thread in Rust)
This has nothing to do with the reason why you can't kill threads in Rust. We could expose pthread_kill()/pthread_cancel() on Unix and TerminateThread() on Windows if we wanted to. The reason we don't expose that kind of termination is that there's no good reason to: if any locks are held anywhere, it's an unsafe operation.
> some difficulty in reasoning about performance implications.
I would actually expect the opposite to be true: 1:1 threading is easier to reason about performance-wise, because there are fewer magic runtime features (like moving or segmented stacks) involved. Could you elaborate?
Calling exit(kill) will kill a process. It has an isolated heap, so killing it won't affect the other (possibly hundreds of thousands of) running processes. Its memory will be garbage collected, safely and efficiently.
This will also work in Elixir, LFE and other languages running on the BEAM VM platform.
EDIT: as user masklinn correctly points out below, the example should be exit/2, i.e. exit(Pid, kill). In fact it is just exit(Pid, Reason), where Reason can be some other exit reason, say my_socket_failed. In that case, however, the process could catch and handle the signal instead of being unconditionally killed.
> This is called from within a thread's execution, right? I think the question is about being able to kill a thread externally.
Yes, the GP has the wrong arity: exit/1 terminates the current process, but exit/2[0] will send an exit signal to another process and possibly cause it to terminate (depending on the provided Reason and whether it traps exits).
This is normal behaviour in Erlang and enables things like seamless supervision trees: exit signals propagate through the tree, reaping processes until they encounter one that traps exits, and a supervisor can freely terminate its children[1].
This can work because Erlang doesn't have shared state[2][3], and BEAM implements termination signaling (so processes can be made aware by the VM of the termination of other processes).
It works correctly either way -- externally with exit(Pid, kill) or from the process itself as exit(kill). The latter is just shorthand for exit(self(), kill), where self() is the process id of the currently running process.
But the way you showed was not the one anyone was interested in: synchronous exceptions work in more or less every language, and you can't assume readers know that your self-kill is actually implemented via asynchronous exceptions, since they don't know the language.
Haskell has killThread, which rather than being an anti-pattern is often used as an effective way to accurately enforce a timeout on a thread. This functionality seems like it would be very difficult to achieve with most other runtimes.
https://news.ycombinator.com/item?id=11370004
Yes. In Haskell you use `killThread`, which throws an asynchronous exception to the thread. It is certainly difficult to perfectly clean up resources in the face of asynchronous exceptions. However, once there are functions available to help you with this (e.g. using a bracket function whenever acquiring resources), it becomes tractable.
This functionality is critical to being able to timeout a thread.
Yes, there is still an issue with async exceptions when an exception is thrown during the cleanup handler of bracket. This has probably not received the attention it deserves because, fundamentally, if a cleanup handler throws an exception you may well still have resource issues. But the article also proposes ways of solving this, so let's not give up on async exceptions.
Note that this isn't exactly a safe operation, since a killed thread may stop in the midst of something. It's safer to have it process messages on a loop and include a quit message.
Can you clarify what you mean by "kill goroutines"? My understanding was that if you return while inside a goroutine, it gets handled by the GC immediately, and (as someone else mentioned) you can use context to send deadlines/cancellation signals to goroutines.
The ability to kill an arbitrary goroutine from the outside. To use context you need to write your specific goroutine such that it checks for cancellation and will eventually handle a cancellation request. This cannot be done with an arbitrary goroutine.
That's not a way to "kill goroutines". That's a way to "ask goroutines to die when they get around to it." Useful, but a fundamentally different thing. Go does not have a way to kill goroutines, nor, per some of the other discussion in this thread, do I ever expect it to.
Think about the interaction with (non-memory) resource ownership. This is just horrible, and I wouldn't even want it in a higher-level language. If you want to carefully notify threads that they must terminate, set up a channel, or write to a shared variable, but please do not just forcibly terminate threads.
Let me interpret the situation in managed languages from a Rust programmer's lens: All managed resources are actually owned by the runtime, and merely borrowed by your program. Thus, killing threads is “safe”: no managed resource can possibly become orphaned. The result is very pleasant as long as your program only uses runtime-managed resources. But things quickly become hairy when you want to use foreign libraries (typically written in C, or exposing C-compatible interfaces), because it's very difficult to arrange things so that cleanup routines are guaranteed to be called before your thread is killed.
> Would seem silly to use rust for all its safety features to then call into c libraries, anyhow.
Why? One very important use case for Rust's unsafe sublanguage is making wrappers around C libraries that can be used from the safe sublanguage. If anything, using C libraries is more pleasant in Rust (or C++) than in managed languages, because you don't particularly need to accommodate or work around the idiosyncrasies and quirks of a complex runtime system. That's pretty much the entire point of my above post (GP to this one).
> So much for Heartbleed wouldn't happen with rust.
You may not agree, but the position consistently taken by Rust is that, while avoiding unsafe code is highly desirable, it isn't always possible. What is always possible is to isolate unsafe code, so that, if the safety guarantees of the safe sublanguage are ever violated, you know what parts of your program to audit.