After reading all that, my only thought is: "Now let's see you debug that."
IMHO, the more abstract and complex a system is, the more chances there are for something to go wrong. When one has to figure out what's actually happening among a huge mess of overly-abstracted code, simple and straightforward always wins. Perhaps it's some kind of job security.
Having been one of the earliest non-author adopters of https://p3rl.org/Future, I can say the difference between having a simple (FSVO) callback and a 'thing' with a reasonably predictable API shape was huge once you were doing more than one thing at once (and if you aren't, async may be overkill anyway).
Plus, a known interface between the code completing the work and the code wanting the result gives you a much clearer boundary, rather than having to keep the whole of the specific async model underlying it in your head (you'll still need -some- of it while debugging, sure, but much less).
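(For the C++ readers here, a minimal sketch of that "known interface" idea using std::promise/std::future; my analogy, not anything from the post. Producer and consumer share a predictable API shape instead of an ad-hoc callback convention per library:)

    #include <cstdio>
    #include <future>
    #include <thread>

    int main() {
        std::promise<int> p;
        std::future<int> f = p.get_future();  // the known interface

        // Producer side: completes the work however it likes
        std::thread worker([&p] { p.set_value(42); });

        // Consumer side: only needs the future's API shape,
        // not the details of how the work got done
        std::printf("got %d\n", f.get());
        worker.join();
    }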
So while your view about over-abstraction is often on point, I think in this case people will find that in practice it's no harder - and in some cases easier - to debug code written to the proposed standard.
I have had to drag people kicking and screaming into the Future before now, but in the end they've all thanked me.
I always kind of scratch my head at this kind of language extension. In my world there are legacy projects and new projects. Why would I want to introduce this into a legacy codebase? If it’s a new project where async language features are important, why wouldn’t I use a language better suited for that?
One reason is new applications in legacy enterprises that are slow at adopting newer/better languages for a variety of reasons. I used to work as an embedded SWE for a Fortune 500 that made heavy use of C++ and was very resistant to Rust/Zig/others, but was much more willing to adopt newer versions of C++ for new projects. Although I would have preferred to work in Rust, having features like these would have been great in that environment.
This is not a language extension; it is a pure library feature. C++ has had async/await (coroutines) since C++20, so it is perfectly suited for that; senders/receivers build on top of that functionality.
In any case, Eric Niebler now works at NVIDIA, which has expressed an interest in senders/receivers, as they are not just for networking but a general mechanism for composing async operations.
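For a flavour of that composition, here is a minimal sketch using stdexec, the reference implementation NVIDIA maintains (the stdexec:: names are that library's, not necessarily the final standard's):

    #include <stdexec/execution.hpp>
    #include <cstdio>

    int main() {
        // Compose a pipeline of async work from generic
        // algorithms: a value source plus two continuations
        auto work = stdexec::just(42)
                  | stdexec::then([](int i) { return i * 2; })
                  | stdexec::then([](int i) { return i + 1; });

        // Block until the pipeline completes; sync_wait
        // returns an optional tuple of the results
        auto [result] = stdexec::sync_wait(std::move(work)).value();
        std::printf("%d\n", result);  // prints 85
    }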
I still have my concerns, as I think the heavy templating will come at a cost in clarity, but I haven't tried them in a large application yet.
When reading the blog, I saw these really long, complex bits of code and was thinking to myself, "_this_ is simpler?!"
It turns out those long, complex pieces of code are not something the consumer (the program writer, me and you) will ever need to see. If the author reads this, I apologise for saying so, but IMO the blog post aims well but misfires: it needs to communicate to C++ developers what code _they_, i.e. we normal folk, will see and write. Otherwise we'll get scared off. (But I do note your absolute enthusiasm shines through, and that carried me through the entire post :))
> At this point you may be wondering what’s the point to all of this. Senders and receivers, operation states with fiddly lifetime requirements, connect, start, three different callbacks — who wants to manage all of this? The C API was way simpler. It’s true! So why am I so unreasonably excited about all of this? The caller of async_read_file doesn’t need to care about any of that.
This is in "step 6", about ten screens into the blogpost. Good thing I read the whole thing!
So, THIS is the takeaway. You have old code (C-style, like the Win32 API):
    /// Old-style async C API with a callback
    /// (like Win32's ReadFileEx)
    #include <cstdio>   // for FILE

    struct overlapped {
        // ...OS internals here...
    };

    // Completion callback the OS invokes when the read finishes
    using overlapped_callback =
        void(int status, int bytes, overlapped* user);

    // Starts an asynchronous read; cb fires on completion
    int read_file(FILE*, char* buffer, int bytes,
                  overlapped* user, overlapped_callback* cb);
> You want to use senders because then you can stitch your async operations together with other operations from other libraries using generic algorithms from still other libraries. And so you can co_await your async operations in coroutines without having to write an additional line of code.
> Why do we need senders when C++ has coroutines? [...] this isn’t an either/or. Senders are part of the coroutine story.
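To make that concrete, here's roughly what the consumer side could look like once read_file has been wrapped as a sender (a sketch assuming the post's async_read_file wrapper and the exec::task coroutine type from the stdexec reference implementation; the exact signatures here are mine, not the post's):

    #include <exec/task.hpp>
    #include <cstdio>

    exec::task<void> print_some(FILE* f) {
        char buffer[4096];
        // co_await the sender: the coroutine suspends here and
        // resumes when the OS completion callback fires, with
        // the byte count as the await result
        int bytes = co_await async_read_file(f, buffer, sizeof buffer);
        std::printf("read %d bytes\n", bytes);
    }

No overlapped struct, no receivers, no operation states in sight.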
And actually -- that looks pretty good to me!
And if the author reads this, may I suggest: put this first! (Combined with the note about multiple different APIs all made cohesive; give two examples, perhaps.) But sell the solution first, then explain the intricate mechanism.
And, great work. I can see myself writing code using this. I like it.
> since Asio was effectively kicked out of standardisation
I saw that that happened but didn't have time to dig into it. Asio has been in use for a couple of decades, so I was surprised it didn't pass muster. Could anyone summarize what went wrong?
Admittedly, what Niebler is talking about does look a lot simpler, but is it a reasonably full replacement for the functionality of Asio?
It is still not clear to me what happened. One issue is that there was no big company behind Asio, and some companies were looking for a more general framework.
Still, from what I have seen, senders/receivers only deal with composing async operations; the proposal so far still lacks an actual network component, so it is far from ready.
This is a shame, I think, because while Asio might not have been perfect, its removal means that in 2024 C++ still lacks a standard network library.