
>Rust originally did have the same thread model as Go baked into the language and standard library

Ehhhhhh not quite. Go provides one threading API, and it's green threading. Rust tried to provide both green threading and native threading using identical APIs. That was a unique and, in retrospect, quixotic decision. Most of the problems identified in the RFC stem from that unified API:

https://github.com/rust-lang/rfcs/blob/0806be4f282144cfcd55b...

I also understand Rust's green-threading implementation was backed by libuv, which, being designed for Node, was a poor fit for Rust:

https://plus.google.com/+nialldouglas/posts/AXFJRSM8u2t

You're the third person in this thread to tell me that Go-like concurrency requires a big runtime, and it remains false. Here are analogous concurrency implementations in C and C++:

http://libmill.org/tutorial.html

http://www.boost.org/doc/libs/1_59_0/libs/coroutine2/doc/htm...

And because of that fallacy, the future of concurrency in Rust is a third-party library that's effectively a one-man show. It's a tremendous loss.




The same approach can and does exist in Rust too, e.g. https://crates.io/crates/mioco .

(Concurrency in C and C++ is also third-party libraries... The whole point of languages like Rust and C++ is that powerful functionality like this can be built externally, so that different trade-offs can be made. Languages like Go and Node force one approach, and so when you need something outside it, you're forced to do something suboptimal.)
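To make that concrete without reaching for mioco's API (this is a minimal sketch using only std, not mioco): Go-style message passing is already expressible with native threads and channels from the standard library; what crates like mioco add on top is green-thread scheduling over mio.

    use std::sync::mpsc;
    use std::thread;

    // Go-style message passing with plain std: native OS threads and
    // channels, no green-thread runtime involved.
    fn main() {
        let (tx, rx) = mpsc::channel();

        for id in 0..4 {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(format!("worker {} finished", id)).unwrap();
            });
        }
        drop(tx); // drop the original sender so the receiver loop can end

        for msg in rx {
            println!("{}", msg);
        }
    }

The trade-off is exactly the one being argued about: this gives you the programming model but pays for an OS thread per task, whereas a green-threading library multiplexes many tasks onto a few threads.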


>The same approach can and does exist in Rust

Without the documentation, stability, portability, quality guarantees, and compiler support (that's a big one — code generation for coroutines needs to be good) of a standard library.

>Concurrency in C and C++ is also third-party libraries

C++ is on track to standardize concurrent file and network IO. Draft specifications have already been published, and Microsoft shipped coroutines in VS 2015. It would be a damn shame if C++ got concurrent IO before Rust.

I would like to use Rust professionally, and I'm sure you do/would as well. But no one can possibly sell their boss on using a project with a single part-time maintainer to provide critical functionality.


The C/C++ libraries you're holding up as examples get no compiler support.

That said, I do think a much better style of coroutines for Rust would be a C#-esque async/await transformation, converting stack-frames/local variables into an enum, allowing literally zero-cost coroutines (all the state is stored inline, no need to allocate a separate stack). Relevant issues:

- https://github.com/rust-lang/rfcs/issues/388

- https://github.com/rust-lang/rfcs/issues/1081

I'm pretty sure this is quite non-trivial to implement automatically.
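To make the enum idea concrete, here's a hand-written sketch (hypothetical names, not compiler output) of the kind of state machine such a transformation might generate for a trivial coroutine that yields twice; all of its "stack" lives inline in the enum:

    // One variant per suspension point; locals that live across a
    // suspension become fields of the corresponding variant.
    enum Doubler {
        Start { input: u32 },
        Yielded { doubled: u32 },
        Done,
    }

    enum Step {
        Yield(u32),
        Complete,
    }

    impl Doubler {
        fn resume(&mut self) -> Step {
            match std::mem::replace(self, Doubler::Done) {
                Doubler::Start { input } => {
                    let doubled = input * 2;
                    *self = Doubler::Yielded { doubled };
                    Step::Yield(input)
                }
                Doubler::Yielded { doubled } => {
                    *self = Doubler::Done;
                    Step::Yield(doubled)
                }
                Doubler::Done => Step::Complete,
            }
        }
    }

    fn main() {
        let mut co = Doubler::Start { input: 21 };
        while let Step::Yield(v) = co.resume() {
            println!("yielded {}", v);
        }
    }

Writing this by hand for one coroutine is easy; doing it mechanically for arbitrary control flow (loops, borrows that live across suspension points) is the non-trivial part.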

---

C++ has had 20 years of stability, Rust only 3 months. On that time scale, Rust will get concurrent IO far earlier in its life than C++ has.

The goal with 1.0 was to stabilise enough of the language that people can start using it to write libraries that work into the foreseeable future, allowing them to seriously explore the space of, for example, concurrent IO in Rust. Once enough exploration has been done (maybe you think enough has been done for async IO now), the functionality can start to become more official.


>The C/C++ libraries you're holding up as examples get no compiler support.

You can open VS 2015 today and use C++ coroutines backed by Microsoft (and their compiler, which is developed alongside their standard library).

And I am by no means saying that Goroutines are the final story in concurrent IO. Stackless coroutines in Rust would be a dream.

>C++ has had 20 years of stability, Rust only 3 months. On that time scale, Rust will get concurrent IO far earlier in its life than C++ has.

Concurrent IO is a hell of a lot more important than it was in the 1990s, and the relative timescale is irrelevant for people choosing between Rust and C++ today (or Go, Scala, Clojure, C#, etc.).

>Once enough exploration has been done (maybe you think enough has been done for async IO now),

Exactly the opposite — I think the number of developers working on this (the Mio author plus Alex Crichton, maybe some offshoots) is far too few.

And the attitude I'm seeing from some core developers in this thread (concurrent IO is a "pet" feature that the community will someday deliver fully formed and ready for "blessing") is a huge disappointment.


Rust is targeting more than one domain. Concurrent IO is not important in all domains. I'm sure the domains you work in need it a lot, but that's not the whole world. (This is all I mean by "pet feature".)

There are cross-cutting concerns that apply to everything (including concurrent IO libraries) that development work is focusing on. Rust doesn't need to eat Go's/Scala's/Clojure's... lunch 3 months after it was released; taking a year or two to settle in and branch out seems fine to me.


>Rust is targeting more than one domain. Concurrent IO is not important in all domains.

Well, what domains is Rust targeting? Rust still isn't a good fit for embedded/kernel stuff without allocators and OOM handling (not to mention every architecture not targeted by LLVM).

That leaves userland applications, and how many applications don't need concurrent IO?

I'm definitely being impatient, and I'm sorry for that. It's just frustrating to see only one core member working on it.


The standard library is different from the language itself: the language certainly doesn't disallow allocators or OOM handling; that's just the design of `std`. Furthermore, the standard library is layered: there's `std`, with the various OS-dependent routines (IO, etc.), and `core`, which is the core functionality that doesn't require any of that. Operating systems/embedded applications can generally use `core`. However, even that is better than C/C++, where one usually writes a from-scratch "standard" library (which is possible to do with Rust too): not being able to use the compiler-bundled `std` doesn't seem like a point against Rust.
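For example (a minimal sketch; on toolchains where `#![no_std]` is available), a library crate can opt out of `std` and depend only on `core`:

    #![no_std]

    // Depends only on `core`: no heap allocation, no OS calls, just
    // core types like Option and checked integer arithmetic.
    pub fn checked_average(a: u32, b: u32) -> Option<u32> {
        a.checked_add(b).map(|sum| sum / 2)
    }

Allocation and IO then come from whatever the surrounding kernel/embedded environment provides, rather than from `std`.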

Many user-space applications don't do heavy network work, which is where concurrent IO is most necessary. E.g. games or a scientific simulation; even web-browsers don't need concurrent IO (they're not trying to juggle thousands of connections).


>Operating systems/embedded applications can generally use `core`.

Which doesn't, correct me if I'm wrong, provide a stable allocator API or OOM handling yet.

>E.g. games

When your client pings 400 servers, how is it doing that? How is the game server implemented?

>a scientific simulation

That runs on one machine, and doesn't do a lot of disk IO? (concurrent file IO matters too).

>even web-browsers

Open the Chrome dev tools "network" tab, and then open Gmail. How many requests did it make?

Again, concurrent IO is important.



