I've never used Go, but it has some interesting concurrency ideas. So do Node and Erlang. I naively expect Rust to adopt the best ideas from each.
>One can use channels to get all four of the conditions required for a deadlock, especially Go's synchronous-by-default channels.
One can, yes. But it's pretty easy to avoid cyclic locking patterns when each request is handled by a separate goroutine, as is idiomatic. One thread per request in Rust will drag pretty quickly.
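For context, "one thread per request" in Rust means something like the following minimal sketch (written against current Rust's standard library; the echo behaviour and the address are just placeholders). It's as easy to reason about as goroutine-per-request; the question is how far it scales:

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    // Thread-per-connection: every accepted socket gets its own OS thread.
    // Just as deadlock-resistant as goroutine-per-request, but OS thread
    // stacks and scheduling overhead add up at high connection counts,
    // which is the niche that async IO (mio, etc.) targets.
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        thread::spawn(move || {
            let mut buf = [0u8; 512];
            // Echo bytes back until the client hangs up or an error occurs.
            while let Ok(n) = stream.read(&mut buf) {
                if n == 0 {
                    break;
                }
                if stream.write_all(&buf[..n]).is_err() {
                    break;
                }
            }
        });
    }
    Ok(())
}
```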
Yes, the borrow checker is an impressive achievement. But is it enough for Rust to succeed? Marketing yourself as a safer C++ is what Java already did (with tremendous success) 20 years ago. And the market for systems languages has only shrunk since then (my phone runs Java).
>All the languages you mention are quite opinionated in their concurrency, imposing costs that Rust doesn't and can't, for its target space.
And yet Rust has already partially standardized channels. Finish the channels, add coroutines and you've implemented Go! (I'll note there are already coroutine implementations for C and C++, which do not limit their use as systems languages).
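To put the "partially standardized channels" point concretely, here's a minimal sketch using only `std::sync::mpsc` (the worker count and messages are arbitrary). The message-passing half of the Go picture is already there; the missing piece is a cheap coroutine to put on each end instead of a full OS thread:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Go-style message passing with the standard library's mpsc channel.
    let (tx, rx) = mpsc::channel();
    for id in 0..4 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(format!("worker {} finished", id)).unwrap();
        });
    }
    drop(tx); // drop the original sender so the receive loop can terminate
    for msg in rx {
        println!("{}", msg);
    }
}
```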
>(And, Go certainly doesn't provide any guarantees at all, not even data race freedom.)
Which, interestingly, hasn't hindered its ability to become a successful language! A lesson worth remembering.
> I've never used Go, but it has some interesting
> concurrency ideas. So do Node and Erlang. I naively
> expect Rust to adopt the best ideas from each.
This is where your naivete shows. Rust originally did have the same thread model as Go baked into the language and standard library, and it labored for years to find a usable compromise between Go's green thread model and the native threading model. And a compromise is indeed necessary, firstly because we don't just need another Go, and secondly because Go's threading model imposes horrific costs when trying to interoperate with non-Go code (literally thousands of times the overhead that you'd expect). For a language like Rust that intends to interoperate with the native ecosystem, that overhead is unacceptable. After about three or four complete redesigns and rewrites the entire green threading infrastructure was chucked to the curb. Fortunately, Rust is low-level enough that libraries like mio can pick up the slack on their own, and in the meantime libraries that don't need green threads don't have to pay the price.
>Rust originally did have the same thread model as Go baked into the language and standard library
Ehhhhhh, not quite. Go provides one threading API, and it's green threading. Rust tried to provide both green threading and native threading behind identical APIs. That was a unique and, in retrospect, quixotic decision. Most of the problems identified in the RFC stem from that unified-API choice.
You're the third person in this thread to tell me that Go-like concurrency requires a big runtime, and it remains false. Here are analogous concurrency implementations in C and C++:
(Concurrency in C and C++ is also third-party libraries... The whole point of languages like Rust and C++ is that powerful functionality like this can be built externally, so that different trade-offs can be made. Languages like Go and Node force one approach, and so when you need something outside it, you're forced to do something suboptimal.)
Without the documentation, stability, portability, quality guarantees, and compiler support (that's a big one — code generation for coroutines needs to be good) of a standard library.
>Concurrency in C and C++ is also third-party libraries
C++ is on track to standardize concurrent file and network IO. Draft specifications have already been published, and Microsoft shipped coroutines in VS 2015. It would be a damn shame if C++ got concurrent IO before Rust.
I would like to use Rust professionally, and I'm sure you do/would as well. But no one can possibly sell their boss on using a project with a single part time maintainer to provide critical functionality.
The C/C++ libraries you're holding up as examples get no compiler support.
That said, I do think a much better style of coroutines for Rust would be a C#-esque async/await transformation, converting stack-frames/local variables into an enum, allowing literally zero-cost coroutines (all the state is stored inline, no need to allocate a separate stack). Relevant issues:
I'm pretty sure this is quite non-trivial to implement automatically.
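To make "stack frames become an enum" concrete, here's a hand-written sketch of roughly what such a transformation could produce; nothing here is actual compiler output or a real Rust API, and the names (`ReadFrame`, `Step`, `resume`) are invented for illustration. A coroutine that reads a 4-byte length prefix and then a body is flattened into an enum whose variants hold exactly the locals that must survive each suspension point:

```rust
// Illustrative only: the state machine a compiler-generated coroutine could
// be lowered to. Each enum variant holds the locals live across a suspension.
enum ReadFrame {
    // Waiting for the 4-byte little-endian length prefix.
    ReadingLen { buf: [u8; 4], filled: usize },
    // Length known; waiting for the body to arrive.
    ReadingBody { body: Vec<u8>, filled: usize },
    Done,
}

enum Step {
    Complete(Vec<u8>),
    Blocked, // would-block: yield back to the caller/event loop
}

impl ReadFrame {
    // Resume the coroutine: run until it would block, storing all state
    // inline in `self` rather than on a separately allocated stack.
    fn resume(&mut self, input: &mut &[u8]) -> Step {
        loop {
            match std::mem::replace(self, ReadFrame::Done) {
                ReadFrame::ReadingLen { mut buf, mut filled } => {
                    filled += copy_some(&mut buf[filled..], input);
                    if filled < 4 {
                        *self = ReadFrame::ReadingLen { buf, filled };
                        return Step::Blocked; // suspension point
                    }
                    let len = u32::from_le_bytes(buf) as usize;
                    *self = ReadFrame::ReadingBody { body: vec![0; len], filled: 0 };
                }
                ReadFrame::ReadingBody { mut body, mut filled } => {
                    filled += copy_some(&mut body[filled..], input);
                    if filled < body.len() {
                        *self = ReadFrame::ReadingBody { body, filled };
                        return Step::Blocked; // suspension point
                    }
                    return Step::Complete(body);
                }
                ReadFrame::Done => panic!("resumed after completion"),
            }
        }
    }
}

// Copy as many bytes as are available, advancing the input slice.
fn copy_some(dst: &mut [u8], input: &mut &[u8]) -> usize {
    let n = dst.len().min(input.len());
    dst[..n].copy_from_slice(&input[..n]);
    *input = &input[n..];
    n
}

fn main() {
    let mut frame = ReadFrame::ReadingLen { buf: [0; 4], filled: 0 };
    // Deliver a 3-byte frame ("hi!") in two chunks to show suspend/resume.
    let mut first: &[u8] = &[3, 0, 0, 0, b'h'];
    let mut second: &[u8] = b"i!";
    assert!(matches!(frame.resume(&mut first), Step::Blocked));
    match frame.resume(&mut second) {
        Step::Complete(body) => assert_eq!(body, b"hi!".to_vec()),
        Step::Blocked => unreachable!(),
    }
}
```

The point of the sketch is the layout: the whole coroutine lives inline in one enum value, so an executor can hold large numbers of them without allocating a stack per coroutine. That is the "zero-cost" property being argued for, and also a hint of why automating the transformation is non-trivial.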
---
C++ has had 20 years of stability, Rust only 3 months. Measured on that time scale, Rust will get concurrent IO long before C++ has.
The goal with 1.0 was to stabilise enough of the language that people can start using it to write libraries that work into the foreseeable future, allowing them to seriously explore the space of, for example, concurrent IO in Rust. Once enough exploration has been done (maybe you think enough has been done for async IO now), the functionality can start to become more official.
>The C/C++ libraries you're holding up as examples get no compiler support.
You can open VS 2015 today and use C++ coroutines backed by Microsoft (and their compiler, which is developed alongside their standard library).
And I am by no means saying that Goroutines are the final story in concurrent IO. Stackless coroutines in Rust would be a dream.
>C++ has had 20 years of stability, Rust only 3 months. Measured on that time scale, Rust will get concurrent IO long before C++ has.
Concurrent IO is a hell of a lot more important than it was in the 1990s, and the relative timescale is irrelevant for people choosing between Rust and C++ today (or Go, Scala, Clojure, C#, etc.).
>Once enough exploration has been done (maybe you think enough has been done for async IO now),
Exactly the opposite — I think the number of developers working on this (the Mio author plus Alex Crichton, maybe some offshoots) is far too few.
And the attitude I'm seeing from some core developers in this thread (concurrent IO is a "pet" feature that the community will someday deliver fully formed and ready for "blessing") is a huge disappointment.
Rust is targeting more than one domain. Concurrent IO is not important in all domains. I'm sure the domains you work in need it a lot, but that's not the whole world. (This is all I mean by "pet feature".)
There are cross-cutting concerns that apply to everything (including concurrent IO libraries) that development work is focusing on. Rust doesn't need to eat Go's/Scala's/Clojure's... lunch 3 months after it was released; taking a year or two to settle in and branch out seems fine to me.
>Rust is targeting more than one domain. Concurrent IO is not important in all domains.
Well, what domains is Rust targeting? Rust still isn't a good fit for embedded/kernel stuff without allocators and OOM handling (not to mention every architecture not targeted by LLVM).
That leaves userland applications, and how many applications don't need concurrent IO?
I'm definitely being impatient, and I'm sorry for that. It's just frustrating to see only one core member working on it.
The standard library is different to the language itself: the language certainly doesn't disallow allocators or OOM handling; that's just the design of `std`. Furthermore, the standard library is layered: there's `std`, with the various OS-dependent pieces (IO, etc.), and `core`, which contains the core functionality that requires none of that. Operating systems and embedded applications can generally use `core`. And even then the situation is no worse than C/C++, where one usually ends up writing a from-scratch "standard" library anyway (which is possible to do in Rust too): not being able to use the compiler-bundled `std` doesn't seem like a point against Rust.
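As a small illustration of that layering (a sketch written against current Rust; the attribute was still feature-gated around the time of this discussion, and the function is invented for the example), a library crate can opt out of `std` entirely and build against `core` alone:

```rust
// A library crate that never touches `std`: no allocator, no OS, only `core`.
// This is the layer operating systems and embedded targets build on.
#![no_std]

/// Parse a big-endian u32 from the start of a byte slice, if there is one.
/// Uses nothing but `core`: no allocation, no IO.
pub fn read_be_u32(bytes: &[u8]) -> Option<u32> {
    if bytes.len() < 4 {
        return None;
    }
    Some(
        (bytes[0] as u32) << 24
            | (bytes[1] as u32) << 16
            | (bytes[2] as u32) << 8
            | (bytes[3] as u32),
    )
}
```

A standalone `no_std` binary additionally needs its own panic handler and entry point, but a library like this can be used from either world unchanged.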
Many user-space applications don't do heavy network work, which is where concurrent IO is most necessary: games and scientific simulations, for example. Even web browsers don't have much need for it (they're not trying to juggle thousands of connections).
>Wrong, it's very much on the roadmap, e.g. Alex Crichton (core team member) has been adding windows support to mio himself.
I mean... https://github.com/carllerche/mio/graphs/contributors?from=2...
Could be worse, could be better?
>Why is this particular pet feature any more important than everyone else's pet feature?
I don't think "good concurrency is a pet feature" is the winning argument here.
>I'm pretty confident that Rust can easily be much better (i.e. more performant and reliable) than both Node and Go and even Erlang.
Me too! But I'm not sure that's going to happen with a single core developer on Mio and no timeline for standardization.