
> Only async/await tracks the "good" kind of blocking, yet lets the "bad" kind go untracked.

The “bad kind” is indistinguishable from CPU-intensive computation anyway (which cannot be tracked), but at least you have a guarantee when you are using the good kind. (Unfortunately, in JavaScript, promises start running as soon as you spawn them, so they can still contain a CPU-heavy task that will block your event loop; Rust made the right call by not doing anything until the future is polled.)

> The exact same number as you would for doing it with `await Promise.all`

From a user's perspective, when I'm using promises, I have no idea how they are run behind the scenes (and it can be nonblocking all the way down if you are using a kernel that supports nonblocking file IO). This example was specifically about OS threads, though, not about green ones (but it will still be less expensive to spawn 1M futures than 1M stackful coroutines).

> Maybe they are, but async/await are the exact same construct only that you have to annotate every blocking function with "async" and every blocking call with "await". If you had a language with threads but no async/await that had that requirement you would not have been able to tell the difference between it and one that has async/await.

I don't really understand your point. Async/await is syntax sugar on top of futures/promises, which is itself a concurrency tool on top of nonblocking syscalls. Of course you could add the same sugar on top of OS threads (this is even a classic exercise for people learning how the Future system works in Rust), though it wouldn't make much sense to use such a thing in practice.

The question is whether the (green) threading model is a better abstraction on top of nonblocking syscalls than async/await is. For JavaScript the answer is obviously no, because all you have underneath is a single-threaded VM, so you lose the only winning point of green threading: the ability to use the same paradigm for concurrency and parallelism. In all other regards (performance, complexity from the user's perspective, from an implementation perspective, etc.) async/await is just the better option.



> which cannot be tracked

Of course it can be tracked. It's all a matter of choice, and things you've grown used to vs. not.

> but at least you have a guarantee when you are using the good kind

Guarantee of what? If you're talking about a guarantee that the event loop's kernel thread is never blocked, then there's another way of guaranteeing that: simply making sure that all IO calls use your concurrency mechanism. As no annotations are needed, it's a backward-compatible change. That's what we're trying to do in Java.

> but it will still be less expensive to spawn 1M futures than 1M stackful coroutines.

It would be exactly as expensive. The JS runtime could produce the exact same code as it does for async/await now without requiring async/await annotations.

> Async/await is syntax sugar on top of futures/promises, which itself is a concurrency tool on top of nonblocking syscalls.

You can say the exact same thing about threads (if you don't couple them with a particular implementation by the kernel), or, more precisely, delimited continuations, which are threads minus the scheduler. You've just grown accustomed to thinking about a particular implementation of threads.

> The question is whether the (green) threading model is a better abstraction on top of nonblocking syscalls than async/await is

That's not the question because both are the same abstraction: subroutines that block waiting for something, and then are resumed when that task completes. The question is whether you should make marking blocking methods and calls mandatory.

> In all other regards (performance, complexity from the user's perspective, from an implementation perspective, etc.) async/await is just a better option.

The only thing async/await does is force you to annotate blocking methods and calls. For better or worse, it has no other impact. A clear virtue of the approach is that it's the easiest for the language implementors to do, because if you have those annotations, you can do the entire implementation in the frontend; if you don't want the annotation, the implementors need to work harder.

Of course, you could argue that you personally like the annotation requirement and that you think forcing the programmer to annotate methods and calls that do something that is really indistinguishable from other things is somehow less "complex" than not, but I would argue the opposite.

I have been programming for about thirty years now, and have written extensively about the mathematical semantics of computer programs (https://pron.github.io/). I understand why a language like Haskell needs an IO type (although there are alternatives there as well), because that's one way to introduce nondeterminism to an otherwise deterministic model (I discuss that issue, as well as an alternative -- linear types -- plus async/await and continuations here: https://youtu.be/9vupFNsND6o). And yet, no one can give me an explanation as to why one subroutine that reads from a socket does not require an `async` while another one does even though they both have the exact same semantics (and the programming model is nondeterministic anyway). The only explanation invariably boils down to a certain implementation detail.

That is why I find the claim very tenuous that two subroutines with the same program semantics should nevertheless have different syntactic representations because they differ in an underlying implementation detail, and that this is somehow less complex than a single syntactic representation. Surfacing implementation details to the syntax level is the very opposite of abstraction and the very essence of accidental complexity.

Now, I don't know JS well, and there could be some backward compatibility arguments (e.g. having to do with promises maybe), but that's a very different claim from "it's less complex", which I can see no justification for.


There are two different things: semantics and syntax.

From what I understand now, you are arguing about syntax: we should not need to write “async” or “await”. I'm not really going to discuss this because, as you said, I do like the extra verbosity; I actually like explicit typing for the same reason (Rust is my favorite, with just the right level of inference), and I'm not fond of dynamic typing or full type inference. This is personal taste and isn't worth arguing about.

On the other hand, there is also a semantic issue, and sorry, I have to disagree: stackful and stackless coroutines don't have the same semantics; they don't have the same performance characteristics, nor do they have the same expressiveness (and associated complexity, for users and implementers). What I was arguing is that if you want the full power of threads, you pay the price for it.

But from what I now understand, you just want a stackless coroutine system without the extra “async/await” keywords, is that what you mean?


Where are you getting the idea that (one-shot) delimited continuations (stackful) "don't have the same performance characteristics" as stackless continuations, especially in a language with a JIT like JS? Also, "stackless coroutines without async/await" would give you (stackful) delimited continuations (albeit not multi-prompt). The reason Rust needs stackless coroutines is because of its commitment to "zero-cost abstractions" and high accidental complexity (and partly because it runs on top of a VM it doesn't fully control); surely JS has a different philosophy -- and it also compiles to machine code, not to a VM -- so whatever justification JS has for async/await, it is not the same one as Rust.

As to semantic differences, what is the difference between `await asyncFoo()` and `syncFoo()`?

BTW, I also like extra verbosity and type checking, so in the language I'm designing I'm forcing every subroutine to be annotated with `async` and every call to be annotated with `await` -- enforced by the type checker, of course -- because there is simply no semantic difference in existence that allows one to differentiate between subroutines that need it and those that don't, so I figured it would be both clearest to users and most correct to just do it always.


> Where are you getting the idea that (one-shot) delimited continuations (stackful) "don't have the same performance characteristics" as stackless continuations, especially in a language with a JIT like JS?

No matter the language, doing more work is always more costly than doing less… Thanks to the GC (and not the JIT), you can at least implement moving stacks in JS, but that doesn't mean it comes for free.

> Also, "stackless coroutines without async/await" would give you (stackful) delimited continuations (albeit not multi-prompt). The reason Rust needs stackless coroutines is because of its commitment to "zero-cost abstractions" and high accidental complexity (and partly because it runs on top of a VM it doesn't fully control); surely JS has a different philosophy -- and it also compiles to machine code, not to a VM

???

> As to semantic differences, what is the difference between `await asyncFoo()` and `syncFoo()`?

That's an easy one. Consider the following :

  GlobalState.bar = 1;
  syncFoo();
  assert(GlobalState.bar == 1);

The assert is always true, because nothing could have run between lines 2 and 3; you know for sure that the environment is the same on line 3 as it was on line 2.

If you do this instead:

  GlobalState.bar = 1;
  await asyncFoo();
  assert(GlobalState.bar == 1);

You cannot be sure that your environment on line 3 is still what it was on line 2, because a lot of other code could have run in between, mutating the world.

You could say “global variables are a bad practice”, but the DOM is a global variable…


> No matter the language, doing more work is always more costly than doing less… Because of the GC (and not the JIT) at least you can implement moving stacks in JS, but that doesn't mean it comes for free.

It comes at extra work for the language implementors, but the performance is the same, because the generated code is virtually the same. Or, to be more precise, it is the same within a margin of error for rare, worst-case work that JS does anyway.

> ???

Rust compiles to LLVM, and it's very hard to do delimited continuations at no cost without controlling the backend; JS engines do control theirs. Also, because Rust follows the "zero-cost abstractions" philosophy, it must surface many implementation details to the caller, like memory allocation. This is not true for JS.

> The assert is always true

No, it isn't. JS isn't Haskell and doesn't track effects, and syncFoo can change GlobalState.bar. In fact, inside some `read` method the runtime could even run an entire event loop while it waits for the IO to complete, just as `await` effectively does.

Now, you could say that today's `read` method (or whatever it's called) doesn't do that, but that's already a backward compatibility argument. In general, JS doesn't give the programmer any protection from arbitrary side effects when it calls an arbitrary method. If you're interested in paradigms that control global effects and allow them only at certain times, take a look at synchronous programming and languages like Esterel or Céu. Now that's an interesting new concurrency paradigm, but JS doesn't give you any more assurances or control with async/await than it would without them.


> This is not true for JS.

JavaScript is much more constrained than Rust, because of the spec and the compatibility with existing code. The JS VM has many constraints, like being single-threaded, or having the same GC for DOM nodes and JS objects, for instance. Rust could patch LLVM if needed (and they already do, even if it takes time to merge), but you can't patch the whole web.

> In fact, inside some `read` method the runtime could even run an entire event loop while it waits for the IO to complete, just as `await` effectively does

No, it cannot without violating its own spec (and it would probably break half the web if it started doing that). JS is single-threaded by design, and you can't change that without designing a completely different VM.

> No, it isn't. JS isn't Haskell and doesn't track effects, and syncFoo can change GlobalState.bar.

Of course, but if syncFoo is some function I wrote, I know it doesn't. The guarantee is that nobody else (let's say an analytics script) is going to mutate that between those two lines. If I use await, everybody's script can run in between. That's a big difference.

> because the generated code is virtually the same.

You keep repeating that over and over again, but that's nonsense. You can't implement stackless and stackful coroutines the same way. Stackless coroutines have no stack, have a known size, and can be desugared into state machines. Stackful coroutines (AKA threads) have a stack; they are more versatile, but you can't predict how big the stack will be (that would require solving the halting problem), so you can either have a big stack (that's what OS threads do) or start with a small stack and grow it as needed. Either approach has a cost: a big stack implies big memory consumption (though the OS can mitigate some of it), and a small stack implies stack growth, which has a cost (even if small).


> Rust could patch LLVM if they needed (and they do already, even if it takes time to merge) but you can't patch the whole web.

You don't need to patch the whole web. V8 could compile stackful continuations just as efficiently as it does stackless ones. It is not true for Rust without some pretty big changes to LLVM.

> No it cannot without violating its own spec (and it would probably break half the web if it started doing that).

Yes, that's a backward compatibility concern. But just as you can't change the existing `read` and need to introduce `asyncRead` for use with async/await, you could just as easily introduce `read2` that employs continuations.

> Js is single threaded by design, and you can't change that without designing a completely different VM.

A single-threaded VM could just as easily run an event loop inside `read` as a multi-threaded one.

> Of course, but if syncfoo is some function I wrote I know it doesn't.

First, it can't entirely be a function you wrote, because it's a blocking function. It must make some runtime call. Second, this argument works both ways. If it's a function you wrote, you know if it blocks (in which case any effect can happen) or not.

> You can't implement stackless and stackful coroutines the same way.

Your entire argument here is just factually wrong. For one, all subroutines are always compiled into suspendable state machines, because that's exactly what a subroutine call does -- it suspends the current subroutine, runs another, and later resumes (but you need to know how it's compiled, something V8 knows and Rust doesn't, as Rust runs on top of a VM). But even if you want to compile them differently for some reason, a JIT can compile multiple versions of a single routine and pick the right one according to context without any noticeable performance cost.

For another, it is true that you don't know how much memory you need in advance, but the same is true for stackless coroutines: you allocate a known-size frame for each call, but you don't know how many frames your `async` calls will need. All of that is exactly the same for stackful continuations. In fact, you could use the exact same code and represent the stack as a linked list of frames if you like (that's conceptually how async/await does it). There is just no difference in average performance, but there is more work. The allocation patterns may not be exactly the same (so maybe not recommended for Rust), but the result would be just as likely to be faster as slower, and most likely just the same as async/await in JS, a language where an array store can cause allocations.


This discussion is just outright ridiculous. Good day.


Good day!


> > [CPU intensive computation,] which cannot be tracked

> Of course it can be tracked. It's all a matter of choice, and things you've grown used to vs. not.

What? When has the halting problem become “something you've grown used to”?!


It's not the halting problem. You can track worst-case computational complexity in the type system just as you do effects (why don't you invoke the halting problem when you need to "decide" whether some effect will occur?). You can Google for it -- there are about a thousand papers on it. Just to give you an intuitive feel, think of a type system that limits the counter in loops. You can also look up the concept of "gas" in total languages. Something similar is also used in some blockchain languages.

You can even track space complexity in the type system.


Are you really summoning non-Turing complete languages to the rescue here? This is hilarious.

We were talking about JavaScript, remember?


First, no rescue is needed. I said that not tracking complexity is a matter of choice, and you mistakenly thought it amounts to deciding halting; I pointed out the well-known fact that it isn't. Second, you don't need a non-Turing-complete language in order to track computational complexity in the type system, just as you don't need a language that cannot perform IO in order to track IO in the type system.

As to JavaScript, there are much more commonplace things that it doesn't track in the type system, either, and they're all a matter of choice. There is no theory that says what should or shouldn't be tracked, and no definitive empirical results that can settle all those questions, either. At the end of the day, what you choose to track is a matter of preference.


> I said that not tracking complexity is a matter of choice, and you mistakenly thought it amounts to deciding halting

And you implicitly acknowledged this fact by using total programming languages as your example: you know, the class of languages in which all you can write is provably halting programs (that's the definition of total programming languages!).

The halting problem being a property of Turing-complete languages, you just sidestepped the issue here.


You can track complexity in Turing-complete languages, just as you can track effects; neither should bother you more than the other. While doing either precisely amounts to deciding halting, the way it is done in programming languages does not (when we track effects, the type system does not tell us an effect will occur, only that it may; similarly for complexity -- we track the worst-case complexity of a given term, which, for Turing-complete languages, can be infinite for some terms, but certainly not all).



