V8 adds support for top-level await (googlesource.com)
391 points by hayd on Sept 24, 2019 | 295 comments



All this async code without decent locking primitives is leading to a rabbit hole of race conditions...

It doesn't matter that it's all single threaded if all your function calls may or may not block and run a bunch of other code in the meantime, mutating all kinds of state.

I feel like JavaScript developers of the 2020s are going to relearn the same things the C programmers of the 1990s learned, just a few levels of abstraction higher.


Data races are impossible in JavaScript because it's single-threaded. Race conditions are possible in any language that can do anything asynchronous, which is basically all of them. But the general benefit you get from JS being single-threaded is the fact that any given callback is transactional. No other code will ever come in and mutate state between two regular lines of JavaScript code. Achieving this is pretty much the whole point of locking mechanisms, so under normal circumstances they aren't necessary for JS.

That said, when you use async/await instead of promises or callbacks, things do change because two sequential lines of code are no longer two truly sequential instructions. You're muddying the most fundamental metaphor of code syntax. That's why I personally don't like async/await syntax, although I get why people want it.


Ehh, I get where you're coming from, but await is at least explicit about it. Did you promise.then()? Did you call this function by putting an await in front of it? You yielded on purpose. If you need to keep hold of a resource on either side of that explicit action, either don't use those constructs, use them but write a locking mechanism, or stop and think long and hard about why the lock is necessary.

Granted, my practical experience is limited. I haven't really run into any of these kinds of situations outside of database work in Node, and there you've got the major benefit of transactions to do the locking for you, so the async JavaScript code doesn't have to particularly care and can yield whenever it wants.

For me, the real benefit of await / async so far has mostly just been about improving code flow. Promises were already an excellent solution to the async problem, but their syntax for all but the most trivial example is something only a mother could love. async/await makes the code structure suddenly not necessarily look like callback hell, and in a lot of cases it's much more compact and easier to read. It greatly improves the chances that when I come back to it a month later, I won't have to reach back into the past and slap myself for writing that monstrosity. :)


> two sequential lines of code are no longer two truly sequential instructions

I've done a lot of async/await and I really can't think of any situations where this has been a concern for me. If you're mutating state in method calls without explicitly passing it around, that might be an issue but that's a deeper design issue IMHO.


One use case I often find is a caching layer between JS and an HTTP call. If there are two calls to a non-cached endpoint, the HTTP request will fire twice before the result is cached. You can work around this by returning the first promise from the cache, but this is essentially a mutex. Having locking primitives would solve this and require less boilerplate code.


Caching the promise is a perfectly good solution. Locks aren't needed.
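Something like this (untested sketch; the names are just illustrative):

    // cache the in-flight promise itself, so concurrent callers
    // share one request instead of each firing their own
    const cache = new Map();

    function fetchCached(url) {
      if (!cache.has(url)) {
        // a real version would evict on rejection so one failed
        // request doesn't poison the cache forever
        cache.set(url, fetch(url).then(res => res.json()));
      }
      return cache.get(url);
    }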


Seems like you should be able to build a lock mechanism fairly simply with async/await, but I agree, it would be nice if it was a built-in primitive. It does feel a little silly to implement a lock function when the engine must be using one to support its async functionality in the first place, so you're introducing a lot of inefficiency.


I don't think it's true that the engine must be using a mutex to support its async functionality, or that an OS mutex would be any more efficient than alternative solutions.

A mutex really can only exist to serialize async code. If you want it serialized that likely means that code shouldn't be async in the first place.


You're right that it may not be required for the async functionality (especially since lock-free schedulers exist, and the engine is basically a single processor scheduler for processes). I do think it's likely in use other places though, just because it's much easier to write correct code with one.


Implementing a mutex is very simple and efficient in JavaScript - probably much lower overhead than Promises themselves.
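For instance, a minimal sketch (untested) built on a chain of promises:

    class Mutex {
      constructor() { this._last = Promise.resolve(); }
      // resolves to a release function once the previous holder is done
      lock() {
        let release;
        const next = new Promise(resolve => (release = resolve));
        const acquired = this._last.then(() => release);
        this._last = next;
        return acquired;
      }
    }

Usage is just `const release = await mutex.lock(); try { ... } finally { release(); }`, and the per-lock cost is one extra promise.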


You write this boilerplate to avoid one extra non-cached HTTP call on a (browser?) client? Am I understanding correctly?


One extra call could be a multi-megabyte video chunk, or the code flow might mean this case always occurs, potentially hundreds of times before the promise resolves.


It's really not a lot of boilerplate - once you have the small utility function, it can be one additional line of code.


When your program is single threaded, a simple boolean flag variable can act as a mutex, you don't need a mutex for this.

Mutexes are for situations where flag variables can change state between your instructions to check and set a flag. This can only happen in multithreaded or interrupt driven code.


Do you really want to block all of your http calls on the result of the prior call in the off-chance that they might share a result just so you can only cache the results and not the promises?

The reason we don't have a mutex in javascript is that there are better solutions to the problems it solves.


Maybe I don't understand your response, but a map of promises to handle parallel in-flight requests for the same resource (which the grandparent pitched) is basically the most elegant yet simple solution to this common problem.

I don't really see how it's a mutex though, you're just returning the same promise to multiple requests. It's far simpler and doesn't use a locking construct.


A promise is a lock in that resolving it is the 'release' and awaiting it is `acquire`.

For example:

    let release;
    const acquire = new Promise(resolve => release = resolve);

    (async () => {
      await acquire;
      // use here
      release(); // now others can use it
    })();

This is a bit different because multiple people can await the lock here, so it's like a "one-time multicast mutex", making it more akin to condition variables.

That said, thinking about it as a mutex is a really backwards way to look at it, in my opinion.


I think it's worth stopping to think in terms of "sequential lines of code". Even at the CPU level, "sequential" instructions haven't truly been sequential for a decade or so, to say nothing of explicitly async code.

One should think in terms of a dataflow graph, where data-independent nodes can run in any order, or in parallel. One should explicitly think about the ordering of effects, and be explicit about effects in general. (Hence the rise in popularity of FP.)


> Even at CPU level, "sequential" instructions aren't

Within the context of the argument you are making, this is disingenuous. There's a big pile of transistors which determine whether it is safe to reorder those instructions.


That same pile of transistors, with a thick layer of software, determines whether it's safe to resume a coroutine.

    const foo = async () => {
      const ice = await freeze(water);
      const wCream = await whip(cream);
      const base = await pour(liquor, ice);
      const cocktail = await put(base, wCream);
      return cocktail.serve();
    }
In the above fragment, `freeze` and `whip` may run in any order or in parallel, you don't get to choose. Still `pour` never runs before `freeze` (though it can complete before `whip`), and `put` can only run last. This is because the above is syntactic sugar, and the dataflow graph gets encoded in the promises graph, with `.then` clauses giving an unequivocal dependency order where applicable.

Same in CPU: two loads can run in either order or in parallel, but an ADD that takes the result of both of them will only run when they both complete.


Is there a bug in your example? You await freeze before calling whip. They can't run in any order or in parallel and must complete sequentially. pour also cannot complete before whip since whip is being awaited.

That example is:

    const foo = async () =>
      freeze(water)
        .then(ice => whip(cream)
          .then(wCream => pour(liquor, ice)
            .then(base => put(base, wCream)
              .then(cocktail => cocktail.serve()))));


Thanks. I stand corrected.


Maybe there’s a misunderstanding with how async/await works...


Apparently so — thank you!


>"be explicit about effects in general. (Hence the rise of popularity of FP.)"

Yes - the benefits of pure functions in particular are increasingly evident.


> when you use async/await instead of promises or callbacks, things do change because two sequential lines of code are no longer two truly sequential instructions

I thought async/await was just syntactic sugar for promises?


Yes. I think what is meant here, is that two sequential lines with `await` in them are no longer sequential instructions, and other code may execute in between them.

If you were using promises or callbacks it would be the same, except that it won't appear like two sequential lines anymore.
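A contrived example (untested sketch) of the kind of interleaving that can bite you:

    let counter = 0;

    async function increment() {
      const current = counter;  // both calls can read 0 here...
      await Promise.resolve();  // ...then yield at the await...
      counter = current + 1;    // ...so one increment is lost
    }

    increment();
    increment();
    // counter ends up as 1, not 2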


It is; but in my experience that fact can make it harder to see what's truly going on.


No it's not. async/await's semantics are similar to those of coroutines, and it's implemented with them.


Are you sure? From what I can find, async [0] causes a function to return a promise, which then returns the result. Await [1] takes a promise, and waits for it to either be resolved or rejected.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


> and waits for it

Yes but that doesn’t mean other lines of code can’t be executing while waiting. The waiting isn’t truly blocking the entire application. Plenty of other things can be happening.


Sure, but the next line of the function awaiting the result can't execute while waiting.


From your first link: "..., async/await is similar to combining generators and promises."


It is true that async functions work similarly to the coroutines found in other languages, and that some transpilers convert async functions into Javascript generator functions under the hood. However, these are both implementation details.

On the flip side, there is a Javascript transpiler called nodent which directly converts async functions into the corresponding `Promise.then` calls, with no generators in sight. This transpiler actually generates smaller & faster output than the standard generator-based approaches, since the async/await semantics map more directly to promises.

In other words, async functions really are syntactic sugar for promises, even if they have some similarities to coroutines / generators.

To try it yourself, just go to http://nodent.mailed.me.uk/ , check the "spec compliant" option, and look at the generated output.


Maybe provide some evidence of this; I've also understood them as syntactic sugar for Promises.

Although I have found that Babel does seem to transpile them differently than plain Promises.


You're sort of right, but it depends on the races and the data structures. If you depend on a remote function and ordering is important, lots of problems can happen.

I've definitely needed more locking primitives and ordering but it's mostly for remote code.

There are also a ton of bugs that can happen with memory allocation.

For example, if you fire off 50 async calls and they all come back at once and each tries to allocate memory and then perform some action at the end, you can have too many in flight and use too much memory.

It's not really threaded but you can run into similar problems.

I've done 10+ years of core JVM threading with raw threads and software transactional memory datastructures and CAS operations and my experience there helps a ton.

Async is definitely not just 'free'


Since an async function is simply a function that returns a promise is there actually any difference between using async/await and using promises explicitly?


It takes longer to boil your brain.


If you're compiling to ES2016 or lower, the compiler will turn async/await into state machines.


Terseness, and readability for less asynchronously minded folk


None, as far as I know


"logical" race conditions can still occur. Eg. 3 lines of code should run without interrupt. The inclusion of an async operation within this block now causes context switch (execution switch). I've had to debug this tricky situation in external libs before.


I don’t understand how the async/await syntax changes anything. It’s really just syntactic sugar used to manipulate promises isn’t it?


> two sequential lines of code are no longer two truly sequential instructions

How is this different from normal multithreading?


Normal, which is to say preemptive, threading can pause your execution arbitrarily.

You can wedge a NodeJS interpreter by not yielding, but you also have control over when that yielding happens.


I suppose it's not, but it is different from normal JavaScript. I guess in multithreaded languages it's not such a big leap.


> But the general benefit you get from JS being single-threaded is the fact that any given callback is transactional. No other code will ever come in and mutate state between two regular lines of JavaScript code. Achieving this is pretty much the whole point of locking mechanisms, so under normal circumstances they aren't necessary for JS.

The same claim can be made for Visual Basic. This always sounds like a stronger guarantee than it actually is. In practice, the exact same mistakes get made irrespective of what the runtime provides.

Ultimately, people have to learn about race conditions in order to write correct code in a nontrivial system. Brushing the 'majority' of such cases under the carpet just makes it harder to educate them.


Javascript devs, for all their flaws, understand async programming far, far better than the average C programmer.

Indeed, your assertion that we need "locking primitives" to counteract "race conditions" is evidence of that. Yes, JavaScript can have race conditions, but not multi-threaded race conditions that cause resource contention [0].

So what good would locking primitives be?

And as a solution to single-threaded race conditions, it's becoming more and more common to use pure functions and immutable data structures in Javascript. In the ReactJS world, it's practically standard to use immutable data structures.

Furthermore, JS devs know how to structure their code, either using nested callbacks or promises, or now, async/await, to avoid async data races. Anyone who programs primarily asynchronously understands these things.

So in summary, it's pretty arrogant to assume that JS devs are going to have to "re-learn" the same things that C programmers learned in the 1990s (also ignorant, because 1990s C was decidedly synchronous). The only devs I know that struggle with race conditions in Javascript are those who are coming from another language or paradigm and who fundamentally fail to understand asynchronous programming.

0. The only exception to this would be those NodeJS devs who use multiple processes or browser devs using web workers along with the ultra-new and not very well-supported shared (memory) resources, both of which are rare in the JS world because there's not much need. You can achieve adequate performance off of one thread for practically any IO-bound application unless you're operating at Facebook scale. And in those case when people are using multiple processes for CPU bound algorithms, they're almost always using them with async message queues anyway, which obviates the need for any locking primitives.

Edit: https://news.ycombinator.com/item?id=21065831 — in this case, async/await can introduce a problem. But using an immutable data structure reference would almost always eliminate this issue.

Also, in practice you're probably using a database whose library has transactions, so you'd "lock" the transaction in this way.

But OK, if you use async/await or generators along with mutability, then a locking primitive could be useful, I'll concede. Although in a single-threaded program, a boolean is just as good.


Here's an example of how you might get a race condition in JS:

    async function deduct(amt) {
        var balance = await getBalance();
        if (balance >= amt)
            return await setBalance(balance - amt);
    }
One way to resolve this would be with a mutex to protect the balance during the critical section (which is async). What would you suggest instead?


I mean, the answer here is that async getBalance and setBalance are the incorrect way to do this, and a mutex won't solve that.

If balance is a JavaScript variable, access shouldn't be provided by async functions. In the case that the balance is a remote resource of some sort (file or network resource), a process-local lock won't solve this for you either.

It seems to me that the lack of locks forces software engineers to consider the nature of their data instead of just grabbing for an inappropriate OS lock.


More subtly, here's an example of a hidden race condition in JS:

  async function totalSize(fol) {
    const files = await fol.getFiles();
    let totalSize = 0;
    await Promise.all(files.map(async file => {
      totalSize += await file.getSize();
    }));
    // totalSize is now way too small
    return totalSize;
  }


I would have written it in a more functional way:

  async function totalSize(fol) {
    const files = await fol.getFiles();
    const sizes = await Promise.all(files.map(file => file.getSize()));
    return sizes.reduce((acc, size) => acc + size);
  }


This is an interesting example because it demonstrates a way to confuse programmers that wasn't previously possible. Thanks for sharing it; it was a gem on an article otherwise full of confused comments.

After sharing it at work, someone pointed out that it isn't technically a race condition. The problem isn't caused by operations happening in an unpredictable order. It's unlikely that any of the promises are already resolved, so the lhs is always evaluated first. It's just that the programmer is surprised by the order of operations.

The takeaway is unsurprising: that `await` should be a signal to make one think carefully about state change; having `await` right after `+=` should be a big signal.

But the fact that an unwary programmer can actually be tripped up is interesting, despite some other comments on this article claiming such trip-ups are inevitable.


Because all the promises are waiting on `getSize()`, right?

But do you mean that JS 1. will read `totalSize` then 2. do the asynchronous call, then 3. add and set? Seems like it's ambiguous and JS could just as easily read `totalSize` after the call, and all would be OK.

Or is the ordering specified?

Thanks for this clever example!


The ordering is specified left to right.

  totalSize += await getSize()
becomes

  totalSize = totalSize + await getSize()
So all the map callbacks run one by one, read totalSize as 0, and then suspend waiting for getSize(). Each one then resolves and assigns totalSize to be 0 + the size.

The race is what order the getSize() calls return in since only the last one will control the return value. Otherwise the issue isn't a race but just a logical ordering bug.

(This isn't super different than doing array[two()] = one() since two will actually run first, so ex. array[i += 1] = i will modify i before assigning the value.)

Correct would be to change the body of the map to:

  const fileSize = await file.getSize();
  totalSize += fileSize;
or the whole function to:

  async function totalSize(fol) {
    const files = await fol.getFiles();
    const sizes = await Promise.all(files.map(file => file.getSize()));
    return sizes.reduce((totalSize, size) => totalSize + size, 0);
  }


Wow, why would you define it this way? I had to check the spec because I didn't believe you. They got #2/#3 backwards.

https://es5.github.io/#x11.13.2


What if you change it to

  totalSize = await file.getSize() + totalSize; 
Would it be fine then?


I mean, if something like this is behind a promise or an async call, that probably means that there's I/O involved (no point using async to access local in-memory data), which means that the local locking primitive won't be incredibly useful, you're going to have to lock it via whatever mechanism the I/O channel (or some API using the I/O channel) provides, such as file locking or some API call or something.


Yep, that's the benefit of having to mark each function explicitly as async. I've heard a lot of people complaining that it's painful, especially during refactorings since it bubbles up, but that's precisely the point: it allows you to know (and choose) whether a function can or cannot be executed interleaved with something else.

That said, if new developers just blindly use async/await without understanding what's going on ... well that's another problem.

The field is filled with foot-guns, for some definition of guns (and for some definition of foot).

The problem with C, or rather the problem with pre-emptive multitasking in general, is that even relatively knowledgeable developers were constantly hitting subtle issues with the memory model. Consider the good old trap of efficiently lazy-initializing a Singleton in Java: https://en.m.wikipedia.org/wiki/Double-checked_locking#Usage...


If the balance is handled in a remote server or location then you should have a remote deduct method which handles the operation in an atomic idempotent manner.

If you are handling data locally, e.g. in a file, then you would ideally rely on an OS write lock and only store the deltas and never change state. You could then calculate the balance from the log of deductions and insertions.

FYI, this problem doesn't have anything to do with async or JS specifically.


Don't have separate get / set functions?

    async function changeBalance(account, amt) {
        /* 
         * BEGIN;
         *
         * UPDATE accounts
         * SET balance = balance - ${amt}
         * WHERE id = ${account};
         *
         * COMMIT;
         */
    }
Of course, this is just a mutex in database transaction clothing :)


>> What would you suggest instead?

On the front-end, I'd do a call to the server. On the server, I'd use a database transaction for that.

But I get your point. The way I deal with race conditions in Redux is to have a mutex. I.e. while something is being fetched, any call to this command is either ignored or queued up.

I use redux-saga for the "race-condition" and general flow, and call the various async functions from within sagas.


How about an optimistic lock?

    async function deduct(amt) {
        var balance = await getBalance();
        if (balance >= amt)
           await setBalance(balance - amt);

        var newbalance = await getBalance();
        if (newbalance != (balance - amt)) {
            await setBalance(balance + amt);
            // tell the user the transaction failed...
        }
    }


This is getting off topic, but what would probably be best here is a test and set operation: setBalance(expectedBalance, newBalance)
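Roughly (untested sketch; assuming the proposed setBalance(expected, next) resolves to whether the compare-and-set succeeded):

    async function deduct(amt) {
      while (true) {
        const balance = await getBalance();
        if (balance < amt) return false; // insufficient funds
        // succeeds only if the stored balance still equals `balance`
        if (await setBalance(balance, balance - amt)) return true;
        // otherwise it changed under us; re-read and retry
      }
    }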


And they call the average C developer less experienced in async. The only correct ‘update balance’ operation is:

  INSERT INTO CashFlow
    (date, income, expense)
  VALUES
    (:date, :income, 0)
Get balance:

  SELECT sum(income)-sum(expense) AS balance
  FROM CashFlow
What modern JS still has to reinvent is in-client synchronizing storage which naturally resolves who waits on what (if at all). This is partially simulated by ReactJS now, trading away a developer's comfort at a near-zero price.

All this “await/promise has clear semantics and we think async” is pointless hope because code should never race with itself. Data should, and the storage must be there to kill associated problems once and forever.


Operational transformations are not the "only correct" way to update an integer and it comes with a number of performance and memory/storage consumption implications.

>What modern js still has to reinvent is in-client synchronizing storage which naturally resolves who waits on what (if at all)

I don't understand what this means but it makes me curious. What would be an example of this in one of the languages that have already "reinvented" it?


SQL.

>it comes with a number of performance and memory/storage consumption implications.

Instead of selecting sum(), CREATE TRIGGER and update a singleton row on insert. You can even create a view with N recent transactions and catch inserts into it to maintain that N, if you don’t want a full history.


How about an `updateBalance(updateFn)` that is protected by a mutex:

    async function deduct(amt) {
      await updateBalance(balance => {
        if (balance >= amt) {
          return balance - amt;
        } else {
          throw new Error("not enough balance");
        }
      });
    }


Mutex specifically refers to OS/processor level constructs that protect critical sections using algorithms such as the bakery algorithm or special CPU instructions. None of these are required to protect the critical section in the example above.

You can just use a busy flag. I am not familiar with JS, so this is approximate syntax.

    while (busy) { await busyIsFalse }
    busy = true

    // critical section

    busy = false
    notify busyIsFalse

Simple boolean flags will work
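In concrete JS, that sketch could be rendered as (untested; `withFlag` is just an illustrative name):

    let busy = false;
    let waiters = [];

    async function withFlag(criticalSection) {
      // "await busyIsFalse": park until the current holder notifies us
      while (busy) await new Promise(resolve => waiters.push(resolve));
      busy = true;
      try {
        return await criticalSection();
      } finally {
        busy = false;
        waiters.splice(0).forEach(resolve => resolve()); // notify busyIsFalse
      }
    }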


Use the datastore... use atomic updates and transactions update balance, where balance >= amount, set balance = x


Can anyone explain this to me, how is it a race condition?


Two concurrent calls to that code. Both of them get the same balance (100), pass the test and deduct the amount (75), getting to a -50 balance.


I correct myself. It leads to a 25 balance but possibly a double spend of those 75 dollars across the two transactions.

The standard pattern is to use database row locking or calling a stored procedure that performs locking inside. Backend developers typically don't like the second solution but it provides a kind of API. Not to be overlooked if there are multiple services accessing it, especially in a polyglot environment.


Code being written in an asynchronous style doesn't prevent correctness errors that would not exist with locking. For instance, using everyone's favorite example, a bank:

In no particular order:

    1. Pizza company debits my balance by $5
    2. I withdraw $5
Assuming both those operations can yield due to e.g. async requests and that they can be resumed at any point, I don't know which order they will yield or resume in.

Consider this interleaving:

    1.1. Get balance, I have $5
    2.1. Get balance, I have $5
    1.2. Set balance to $0
    2.2. Set balance to $0 (but there are total debits of $10!)
With locking:

    1.0. Acquire account lock
    1.1. Get balance, I have $5
    2.1. Get balance: wait on lock
    1.2. Set balance to $0
    1.3. Release account lock
    2.1. Get balance, I have $0
    2.2. Can't set balance, will overdraw!
With regards to resource contention, anything that causes a waiter graph cycle can cause deadlocking, regardless of single- or multi-threading. I can't think of a compelling example here, but nobody expects to have deadlocks yet they still happen :)


> With regards to resource contention, anything that causes a waiter graph cycle can cause deadlocking

Maybe I'll learn something here, but can you explain how an async runtime with no locking primitives like JS could cause a waiter graph cycle?


You don't need locking primitives for a deadlock. Await is sufficient.


An example of a deadlock that continues the bank balance analogy:

A transfer funds method that does:

    1. Lock account A
    2. Get balance and check
    3. Lock account B
    4. Debit account A
    5. Credit account B

If the two accounts transfer to each other at the same time, you can get:

    1. A->B: Lock account A
    2. A->B: Get balance and check
    3. B->A: Lock account B
    4. B->A: Get balance and check
    5. A->B: Lock account B -- deadlock

One solution is to always acquire locks in the same order and release all of them when you fail to obtain one. You can do this in the transfer example by sorting by account ID (so always lock A before B).
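Sketched out (untested; assumes each account carries a promise-based mutex exposing lock()/unlock()):

    async function transfer(a, b, amount) {
      // always acquire in a global order, e.g. by account id
      const [first, second] = a.id < b.id ? [a, b] : [b, a];
      await first.mutex.lock();
      await second.mutex.lock();
      try {
        if (await a.balance() >= amount) {
          await a.sub(amount);
          await b.add(amount);
        }
      } finally {
        second.mutex.unlock();
        first.mutex.unlock();
      }
    }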


Sure enough, but in javascript if your balance operations can yield due to async requests then you already have a bigger problem than a shared in-memory variable, which is really the only case a standard OS lock can solve.

Let's say that balance is in a database or behind a REST api. An os-lock won't solve the problems of other processes trying to update that same resource.


You can imagine a function being unintentionally promoted into async-world for some kind of cross-cutting concern even if the data is stored in memory, e.g. if logging requires an async call, or if reporting metrics requires an async call.

While I agree that general practice in JavaScript programming stops you from this kind of footgun, you can still be caught off-guard if you end up thinking that this programming style is immune to this kind of problem: locking is a big hammer with its own problems, but it will never cause data incorrectness.


> Just because code is written in an asynchronous style doesn't prevent correctness errors that would not exist with locking.

Yes, that's why I referenced passing around a reference to an immutable data structure, which is fairly common in JS.


How will you update the bank account with an immutable data structure? You need to mutate the balance somewhere.


Well, in practice you're presumably using a database, which has transactions in the API you're using.

But to answer your question, in functional programming (also frequently ReactJS, if you use libraries like Redux), you don't mutate the data structure. You create a new data structure based off the old one.

Am I misunderstanding your question? Just because you use an immutable data structure doesn't mean it can't be updated.


Database written in not(Javascript) of course, since it has to work and handle concurrency properly.


Yes, of course. Why would you build a database in JS?

And does gmail not work well for you?

Most of us are only using JavaScript because that's the only way to build browser applications (or compile-to-JS languages, which still require an understanding of JS or you'll run into problems).

In any case, I don't particularly like JS, so your dig fell short. But JS is necessary for many of us.


> "Most of us are only using JavaScript because that's the only way to build browser applications (or compile-to-JS languages, which still require an understanding of JS or you'll run into problems)."

The javascript apocalypse is inevitable, we're just yet to reach the tipping point where this hard-requirement to use JS for web/browser applications is no longer there.


> also ignorant, because 1990s C was decidedly synchronous

There was definitely threading in C in the 1990's, and threads are asynchronous by nature. There's a whole class of synchronization bugs that C and C++ developers had to learn to deal with, and while JavaScript developers get to avoid some by the nature of there being a single thread, that doesn't necessarily exempt them from all of them.


Threading implementation was async, not the programming model. The async programming style/model (as far as I'm aware) refers to the use of callbacks or coroutines, neither of which was common at all in C in the 1990s.

Indeed, nginx caused big waves due to its superior IO performance as the first popular async http server released in 2004.

But correct me if you have a different understanding of the term.


Your post manages to say nothing correct. Javascript did not invent callbacks, asynchronous programming, or event driven programming. Nodejs popularized it in the 2010s but you are claiming credit for something that has been in common use since the 70s. I have no clue how anyone can speak with such authority while clearly having not done a cursory glance at programming APIs available in the 80s and 90s.

Lighttpd was released before nginx and was wildly popular for a good while. Not to mention other servers like AOLserver in the 90s.

The use of non blocking sockets was nothing new, used heavily in C code throughout the 90s and is the basis of asynchronous processing (refer to Stevens Unix network programming). You should also read documents by John Ousterhout written in the 90s about this topic.

Now, just about every major GUI library was written in C or Pascal and used an event-driven callback model. You can refer to the Windows API, the modern Mac Carbon API which was developed in the 80s at NeXT, and the older Mac SDK. The Windows SDK allows event-driven programming for UI and IO. tcl/tk. Just about anything on top of X (e.g. Motif, GTK). The list will go on and on.

Do some research before making bold claims about C programmers not understanding asynchronous programming and callbacks.

Also I think the other reply is correct in asserting that your definition of async is making distinctions without differences.

E.g. from 1995: https://web.stanford.edu/~ouster/cgi-bin/papers/threads.pdf — note that event-driven programming was nothing new in 1995; it was the basis for the Windows API designed years prior, after all, but this was around the time that multithreading was pushed for everything.

Heck, fibers (https://docs.microsoft.com/en-us/windows/win32/procthread/fi...) have existed in the Windows API since at least '95.


I'm referring to OS-level threads, but functionally it's no different if we talk about forking with shared memory. Given that the vast majority of computing 15 years ago was single processor and multi-process or multi-thread for asynchronous behavior (or use of select), it's all functionally equivalent, and many of the same problems discovered many decades ago for multi-process programs apply.

If I have a main program, and I fork a child to handle a task, and use shared memory to communicate, how is that any different than Javascript executing an async call that sets or returns a value? The Javascript runtime has the same behavior as the OS scheduler in this, and if there's a single processor, it necessarily will only execute one instruction at a time. There's still plenty of pitfalls to worry about and that's why locks were (and are) useful, and why they are included with OS thread implementations (which are really just a nice API on forks and shared memory, often dealt with by the OS for additional benefit).


The most common need I've found for some kind of locking mechanism in JavaScript is the very basic example of asynchronous functions triggered by UI. For example, button -> http call -> navigate. If the user presses the button twice, the call is made twice before navigating, which has unexpected results in some cases.

The lock in this case could be as simple as disabling the button as soon as the user clicks on it, but I've also found Promise-based locking functions to be very useful in solving these kinds of problems.

I wouldn't call these "primitives" though - you don't need a language construct for it. It's easy enough to build the locks as functions working with Promises.
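For example, a small wrapper like this (untested sketch) covers the double-click case:

    // wrap an async handler so re-entrant calls are dropped
    // while one invocation is still in flight
    function oneAtATime(handler) {
      let inFlight = false;
      return async (...args) => {
        if (inFlight) return; // ignore the duplicate click
        inFlight = true;
        try {
          return await handler(...args);
        } finally {
          inFlight = false;
        }
      };
    }

    // button.onclick = oneAtATime(async () => { await httpCall(); navigate(); });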


> (also ignorant, because 1990s C was decidedly synchronous)

unless you were programming in... any GUI toolkit ever, except the most toy ones? Even Win 3.1 GUI primitives were async


Yeah, fair point. Technically, Windows was C++ but close enough, plus as you say other GUI toolkits also used async. And I should have remembered that, since GUI libraries used async for the same reason as browser JS: it's not good to block the UI thread.

I was thinking about C network programming, which despite what people are saying on here, was not event driven in the 1990s (I was there).


> 1990s C was decidedly synchronous

You know that Javascript is a C(++) program? That when you use TCP, the protocol is in C? The ethernet driver is written in C? The OS scheduler is written in C? That you're programming in a little sandbox, and all the concurrency around you is managed in C?

There has never been anything synchronous about C.


> You know that Javascript is a C(++) program?

JavaScript is not implementation-defined. There's a spec, and there are interpreters in a number of different languages. Sure, most common JS interpreters/JIT and otherwise, are in C++, but that says nothing about whether or not they use an event-driven style underneath to program the interpreter.

And regarding C being synchronous, I'm talking about programming models, not the underlying architecture of the computer or OS. If everything is programmed async as you seem to suggest here, then why do we bother distinguishing the two? Why do most books on network programming have a separate chapter devoted to async or event-driven programming? Why does the Unix socket API have `socket.setblocking(0)`?

But I suspect you know damn well what I'm talking about.

I've got to get off of social media.


No matter what language you are using, concurrency happens in instructions, interrupts, cores, caches, devices, virtual memory mechanisms, etc, not even getting into GPU architecture. In C you have direct control over these things, you can make the system as concurrent as you want.

In Javascript you have a little window into this through whatever the layer below provided you. So Javascript by definition has a (small) subset of the concurrency you can get in a systems language.

And no I'm not entirely sure what you're talking about. It sounds like you learned concurrency in Javascript, and define concurrency in terms of Javascript primitives. But that's merely a guess on my part.


> "Javascript devs, for all their flaws, understand async programming far, far better than the average C programmer."

Not the junior ones, they don't. I understand the desire to discuss this advanced topic assuming only perfect, experienced developers, but that is almost never the case in concrete scenarios. Developers forget, developers share code with other developers and the consequent spaghetti mess is hard to reason about 100%, developers make mistakes, developers are sometimes yet to learn something, developers miss small bugs, new code deals with library code that might have an async bug, etc.

Sequential flow is much easier to reason about and control for, and if you ask for my opinion, we should only use async features if the benefits of their usage far outweigh the potential complexity and headache that they introduce if you don't "use them the right way".


I am not sure how the parent comment has received so many upvotes. OP has some fundamental misunderstanding of mutexes and the purpose of async io.

Locking primitives are completely unnecessary in any single-threaded program. Also, mutexes, in C or otherwise, are way older than 1990; they go back to the 1960s.

When your program is single threaded, a simple boolean flag variable can act as a mutex, you don't need a mutex primitive for this. Mutexes are for situations where flag variables can change state between your instructions to check and set a flag. This can only happen in multithreaded or interrupt driven code.

In fact, the entire purpose of async programming is to stop the use of mutexes and multiple threads to perform concurrent IO, which is something that can be performed by a single thread. This is the foundational premise of nodejs.


I think JS devs usually use async/await for network calls and not really for computation or file access. Those type of applications are better served by other languages.

A deadlock in the problem space that JavaScript operates in would be a rarity I feel.


I've been using async/await especially for file access.

    const fs = require('fs').promises
    const path = require('path')
    
    const filePath = path.join(__dirname, 'package.json')
    const fileContents = await fs.readFile(filePath, 'utf8')
    
    console.log(fileContents)


Well I personally would use a synchronous method rather than async for something like that where the next code path depends on the return data; if you can't get the file contents then you want to throw an error straight away.


That would seem to make sense, using readFileSync() instead.

I assume it would really do much the same thing as the await example. Is there any difference?

If they do the same thing, then what's the benefit of async/await? I guess it's that you can now write your own "fs.readFileSync()" or something like that in JavaScript when needed.


fs.readFileSync() blocks the event loop preventing any other code from being run or other events/requests from being processed. Await yields control back so that other requests can be processed while the file is being read.


Ah I see. Genius. So is there ever a use-case for NOT using the async/await version of readFile()? Is fs.readFileSync() now just legacy which we don't need any more?


Is there an advantage to using "await fs.readFile" over "fs.readFileSync"?


You haven't blocked the entire event loop (i.e., other async functions that may be waiting on I/O) while the file is being read.


Synchronous code blocks the event loop, so in real world code you'd only use the sync-version in some special cases, like when you're reading config files before you "server.boot()" and other scenarios where it's just a one-time cost.

Though the few sync functions in the stdlib become less and less convenient, first when promises came out, then with async/await, and now with top-level await.


JS is single-threaded right so (...I think...) deadlocks are actually impossible. Although I guess you can still get stuck if two pieces of code are waiting on each other to satisfy some condition (and trading control of the sole thread) without using explicit locking.


Deadlocks can happen in any kind of concurrent system with locks. As soon as you start building & using locks with Promises, this is quite easy to happen.

That said, the need for locks in JavaScript is much less than in a typical multi-threaded language, so it's a lot easier to keep track of and avoid deadlocks, if you're even using locks at all.


Yeah, nothing about JavaScript being single threaded implies any resistance to deadlock.


It's still much harder to deadlock single-threaded code than multi-threaded code because single-threading eliminates the need for most locking, which in turn eliminates most opportunities to cause deadlocks.


I love setTimeout


What does setTimeout have to do with deadlocks?


Despite other deficiencies, JavaScript devs are generally pretty good at writing async code, for the simple reason that you get burned suuuper quickly if you try to mutate state outside of an async context when doing things like adding click handlers. This is something most front-end devs learn pretty quickly since it's such a fundamental part of writing JavaScript.


I don't know if you intended to, but it comes across as arrogant and dismissive.

Dealing with asynchrony has been an issue in JavaScript from the beginning, as the fundamental model is event driven.

There are a number of mechanisms for coordinating work, including mutexes (see Atomics.wait()), but for >99% of code, Promises are a better option.


Since it’s all single-threaded, you can write your own locking primitives and they will work correctly.


> It doesn't matter that it's all single threaded if all your function calls may or may not block and run a bunch of other code in the meantime, mutating all kinds of state.

Can you elaborate a bit more on this? I'm unsure about how locking in a single-threaded environment would work. And how would it really differ from async/await or promises which can handle race conditions already?

One area where the lack of locking primitives does worry me is the use of shared mutable memory. I've never used these techniques in JS, but if I remember right one can now do something like this:

1. Create a shared memory array

2. Spawn several workers which do work on shared arrays

3. Pass a reference to the array to each worker and let them run.

This seems like an area where we could start to see these problems start appearing. There's a module related to atomic operations [1] but there is no language-level enforcement of using it. There could be good reasons why this isn't worrying but I just don't happen to know that off hand.
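For illustration, the scenario looks roughly like this (untested sketch):

    // main thread
    const sab = new SharedArrayBuffer(4);
    const counter = new Int32Array(sab);
    // ...worker.postMessage(sab) to each worker...

    // inside a worker, these two increments are very different:
    counter[0] = counter[0] + 1;  // read-modify-write: updates can be lost
    Atomics.add(counter, 0, 1);   // atomic: safe across workers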

[1]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


In JavaScript, anywhere you see await you've introduced an explicit scheduling point. There are cases where even in a single-threaded env you want to "wait" until some other async process is complete. To use one of the old school examples:

    function transfer(amount, acct1, acct2) {
        var current_balance = await acc1.balance()
        if (current_balance > amount) {
            await acct1.sub(amount)
            await acct2.add(amount)
        }
    }
Now what happens if you get multiple calls to transfer? What you want is probably something like:

    function transfer(amount, acct1, acct2) {
         await acct1.Lock()
         ....
         await acct1.UnLock()
    }


I don't think it really makes sense to essentially say "Javascript is rife with race conditions like any other language" just because yes, you still need to use transactions when using a database.


You're reading too much into the example. Your 'database' can be your frontend js state. There is nothing in the example that limits the problem to backend development.


Yeah but the impact of race conditions on front-end code (which is always single user) is minimal; unlike the backend where it's catastrophic and not fixable by refreshing the page.


It is fixable by refreshing the page if there are no race conditions in the front-end.

If anything getting this right on the front-end is often extra problematic because of the high cost of round-tripping state to the server.


In that case, the code would be sync and there wouldn't be a "race condition".


I just the other day had to step in to deal with a race condition in a frontend caused by improperly handling async API requests.

It's dangerous to assume that just because JS has a single main thread that you don't need to think about sequencing of operations and locking.


While writing async code can indeed be challenging, I just disagree with the idea that JavaScript is lacking good "locking primitives":

> All this async code without decent locking primitives is leading to a rabbit hole of race conditions...

No, it's not. Here's a lock:

   let lock = Promise.resolve()
   // ...
   lock = lock.then(() => { /* critical section */ })
For the rare instances where more complex async flow control is required, there are a bunch of libraries for that (e.g. async[0], p-queue[1], etc.).

[0] https://caolan.github.io/async/v3/docs.html

[1] https://github.com/sindresorhus/p-queue


Good point! I hadn't thought of it this way since it often doesn't come up as often in my experiences with front end stuff.

To lay out one example as clearly as I can: suppose you had two calls to transfer with the same args.

* The 2nd call to `await acc1.balance()` was very quick, beating the 1st call

* The `current_balance` is greater than `amount` and so it runs `await acct1.sub(amount)`

* Before that can finish, the 1st call to `balance()` returns with the old balance before subtraction begins.

* Now another transfer is initiated with the incorrect amount.

If `amount` is larger than `current_balance` you've got an issue on your hands.

I believe one could implement a mechanism around this using a set of booleans/ints for each account and managing the call to `transfer` with each one of those. But that's the point - we could have primitives to do that.

By the way, there is a particular issue with this specific example. The `transfer` function signature must be marked as `async`. You would need to add this to make it run properly.


Good example. I think this should be handled by whatever data store is being used, though. In case of SQL, transaction.


You'd need to lock both accounts... but even then. Doing any kind of accounting operations without being atomic, transactional or idempotent is just wrong. Even with a single call, acct2.add() could fail and you're left with a broken state. If this is in-memory state then you don't need asyncs at all. If it's not, locks just give you a false sense of safety.


There is an Atomics API for that...


Oh, you're going to love (hate) what I suspect will happen then:

> race conditions

I've gotten into multiple arguments with first-language-javascript devs who think race conditions can't happen in javascript "because it's not multithreaded". I'm thinking before they can accept this type of bug, it's going to get a new trendy name first, then all the old knowledge of "race conditions" continue to be ignored.


> All this async code without decent locking primitives is leading to a rabbit hole of race conditions...

I haven’t seen this happening. Maybe it’s because I tend to use Node for web apps too much, which don’t have much shared state within Node to begin with? That it’s single-threaded does matter immensely, though, because if you have shared state that can be updated synchronously, you just write the code and know that it runs as a unit.


In a single-threaded universe, what more do you need to lock the world than a boolean variable?


Since booleans can't be awaited until their state is negated/toggled, I'd say a lock that works like a lock is needed.


So use a Promise like a lock then. If it exists, wait on it. If not create one…

Of course you don’t need to use the existence of a Promise as your Boolean in this case. You can simply use the Boolean state in addition to the Promise because your code is not going to yield while atomically setting one Boolean variable.


That would unblock all coroutines waiting for the promise, instead of just the first one. It's not that trivial, that's the point.


For queued ordering instead of unblocking everyone at once you can reassign the Promise.

  let lock = Promise.resolve()
  const wait = (callback) => lock = lock.then(() => callback());
Until the callback resolves the lock is held, so you can do:

  wait(async () => {
    await step1();
    await step2();
    // release the lock.
  });


If the callback throws an error, none of the awaiting coroutines will run, and adding a catch would break the callback error propagation.


Yes, but he was giving the simplest implementation for the sake of brevity.

Also, nobody said that we need callback error propagation in a priority queue to act the same exact way that callback error propagation works in a straight promise chain... but if you just want to reject all waiters in case of an error, you can do that.

Take a look at sindresorhus' p-queue project - https://github.com/sindresorhus/p-queue - which is about 40 lines of code that take care of queueing exactly the way you want. And it's easy to modify if you want to reject the whole queue in case of an error, which is something that I've done myself in the past. You can see them discussing the possibility of that feature right here - https://github.com/sindresorhus/p-queue/issues/29
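If you don't want the dependency, an error-tolerant version of the `wait` above might look like this (untested):

    let lock = Promise.resolve();

    const wait = (callback) => {
      const result = lock.then(() => callback());
      // swallow the rejection only inside the chain so later waiters
      // still run; callers of wait() still see the rejection
      lock = result.catch(() => {});
      return result;
    };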

Long story short: Your problem is with ordering, not locking.


Even the original solution is a lot more complex than just checking some primitive value. A lock is still needed when doing concurrency on a single thread, which is the point I'm making anyway.

> Long story short: Your problem is with ordering, not locking.

Look up the definition of a concurrency lock.


You moved the goal posts on me. But the solution for your next scenario is actually pretty trivial though isn’t it?

You don’t really need what you call locking in a single-threaded world. You just need a state machine.


No, I didn't. That's how a lock works. Only one coroutine (or thread) is allowed to hold it at a time while the others must await their turn. A state machine is a lot more than a boolean variable.


Both of your statements are incorrect. Locks are an abstract concept that have many different behaviors. In a multi-threaded world (not JS) a certain type of synchronization lock, such as a single-writer/multiple-reader locking mechanism would specifically allow you to unblock all waiting coroutines. Another type of lock might only allow the first waiter to unblock.

There simply is no concurrency in JS though, so there is no need for locking. There is no way that you would ever "unblock all coroutines waiting for the promise" because you just can't run more than 1 coroutine at a time. So your problem is with ordering, not concurrency and locks are a solution for concurrency.

And the simplest state machine is a boolean variable.

Anyway, I guess if you still disagree then you can go and ask the people who build JS engines why they don't want to add locks. Maybe ask the v8 team - they add stuff that isn't in the spec all the time and they haven't seen a big need for this, so they must know the answer...


First look up the definition of a lock and mutual exclusion. There are two types of basic locks: mutexes and semaphores. Both keep two threads or coroutines from executing a block of code/instructions at the same time, and both work in a similar way to what I already described.

> There simply is no concurrency in JS though,

There is no parallelism in JS (well there is with workers now), concurrency is not parallelism.


Show me a piece of code where you think you need locks I guess.

I will concede that I got the two terms confused, mostly because I haven’t had to think about this kind of stuff in years because there’s literally nothing I’ve ever had to do with JavaScript that’s required me to think about it.

But I’d really love to see some code where you can’t handle concurrently waiting coroutine priorities in a single threaded world. Concurrency without parallelism is not really a problem the way I see it.


There are more locking mechanisms than a bool, and more valuable ones. Read-write locks, for example, are still sometimes (not often, but I've done it on occasion) valuable in NodeJS.


Only the single JS thread is ever reading or writing a value so what is the lock for?


You get that it's super common to await in the middle of something that could be considered a critical section, yeah? And that doing so will result in a different coroutine (for lack of a better term; that's what Promise-driven code effectively is) being executed until control returns to the awaited Promise within that critical section?


Yes but async/await is for concurrency, not parallelism. Only one or the other "coroutine" will run at any time so what is the lock protecting? There's no way for multiple branches of code to access the same variable at the same time in JS. What else would a reader/writer lock be used for?


Locks are for concurrency. Parallelism is orthogonal.

When you are forced to yield inside of your critical section (a database call, a file write, whatever), as is common in NodeJS, you must acquire a lock that another coroutine can't run through.


He got the terms wrong but he’s absolutely right.

> There's no way for multiple branches of code to access the same variable at the same time in JS.

This is 100% true and it's the very reason why we do not need locks. There is no thing called a critical section in JavaScript because in JavaScript all of your code runs on a single thread and it's all equal, therefore none of it is more volatile than any other piece of code. All that you need to manage asynchronous concurrency is good state management, not "locks", because there is nothing to lock since there are not multiple threads asking any of your JS code for control of execution at the same exact time. It's all scheduled for you onto your single thread.

With that, what you don’t get in JS of course, is shared-state parallelism.


None of the concerns of concurrency go away because you don't have threads, they go away when you have no multitasking, and NodeJS is a cooperatively multitasking environment (modulo a few minor asterisks).

So of course there are still critical sections--a critical section is a set of operations that must have uninterrupted access to a resource in order to maintain a desired level of consistency. If you have to yield (that is, `await` on a Promise or wait for the invocation of a callback), which sometimes you do have to do (i.e., you have to do IO, you've got a crunchy computation running in C++ outside of the JS loop, whatever), then you need to prevent access to that resource from whatever is being picked up next by the JS runtime while the first user is waiting for that IO to finish or that computation to come back.

To that end, locks are that "good state management" to which you refer. Most of the time one can use a mutex--the `async-lock` library is common--but I've even encountered cases where resource management must occur in structured forms and it was necessary to implement a read-write lock, where multiple readers can operate in tandem but all readers must close out before a writer can take control.
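
For a sense of scale, a serviceable mutex in this world is just a promise chain. This is a hand-rolled sketch of the idea, not the `async-lock` API:

    class Mutex {
      constructor() { this.last = Promise.resolve(); }
      // Resolves with a release() function once the previous holder is done.
      lock() {
        let release;
        const next = new Promise(resolve => { release = resolve; });
        const acquired = this.last.then(() => release);
        this.last = next;
        return acquired;
      }
    }

    const mutex = new Mutex();
    async function critical() {
      const release = await mutex.lock();
      try {
        // Awaits in here cannot interleave with other critical() callers.
      } finally {
        release();
      }
    }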

The basics don't change when your multitasking is cooperative rather than preemptive--all the same stuff still applies. This is not a controversial set of assertions. I don't understand at all where you're coming from.


Your note about the 'async-lock' NPM package clears up the confusion. That's used to serialize operations to external systems; their readme shows an example with multiple Redis calls within await blocks setting the same key incorrectly.

This conversation was talking about access to variables and data within JavaScript code itself. Any language is open to data races when dealing with external systems. I can see why having a `lock` construct can be helpful here, but it seems extraneous when the actual language doesn't support concurrent access and simple packages already exist. The 'async-lock' package is just a wrapper around a few variables, and it only works precisely because JS is already single-threaded and serialized.


When you have to bring C++ into the conversation to make your point, then it sounds like it’s a problem for C++…

There simply is no concurrency in JavaScript, so lock primitives are not solving any problem that JavaScript has.

Can you give an example of some code where you think locks would help you?


I don't think a critical section is necessarily related to parallelism, but concurrency (As shown in other examples in this thread). If you google for "critical section", the first link (Wikipedia) begins "In concurrent programming,".

Regardless, if you were to have need of a lock, which was the basis of this chain of comments unless I'm misremembering, a read-write lock could improve performance, regardless of things being single-threaded, precisely because of things being async (if you have N async tasks all read-only locking an rwlock, they can go on to fire async requests, which will run concurrently, despite the execution being single-threaded).
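
To illustrate, here's a hand-rolled sketch (not a published package): readers stack up and fire their requests concurrently, while a queued writer waits for all of them to drain.

    class RWLock {
      constructor() { this.readers = 0; this.writer = false; this.queue = []; }
      _pump() {
        while (this.queue.length) {
          const head = this.queue[0];
          if (head.write) {
            if (this.readers > 0 || this.writer) break;
            this.writer = true;
          } else {
            if (this.writer) break;
            this.readers++;
          }
          this.queue.shift().resolve();
        }
      }
      acquire(write = false) {
        return new Promise(resolve => {
          this.queue.push({ write, resolve });
          this._pump();
        });
      }
      release(write = false) {
        if (write) this.writer = false; else this.readers--;
        this._pump();
      }
    }

    const lock = new RWLock();
    async function readTask() {
      await lock.acquire();                               // shared
      try { await new Promise(r => setTimeout(r, 10)); }  // stand-in async I/O
      finally { lock.release(); }
    }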


Nothing really, but you'll still want to be sure the lock is released correctly.


JavaScript uses non-preemptive multitasking. It's trivial to write locking primitives. Indeed there are already many: https://www.npmjs.com/search?q=mutex

You could even have a method decorator (https://github.com/tc39/proposal-decorators) that emulates Java's "synchronized" keyword.
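
Until decorators land, the same effect can be sketched with a plain higher-order function (hypothetical names; the queue is per wrapped function):

    function synchronized(fn) {
      let last = Promise.resolve();
      return function (...args) {
        const run = last.then(() => fn.apply(this, args));
        last = run.catch(() => {}); // keep the queue alive after a rejection
        return run;
      };
    }

    // Calls now run one at a time, even across internal awaits:
    const save = synchronized(async function (record) {
      await new Promise(r => setTimeout(r, 10)); // stand-in async I/O
      console.log('saved', record);
    });
    save(1); save(2); // 'saved 1' always logs before 'saved 2'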


Isn't every new language doomed to relearn everything in some way?


Not everyone uses languages designed by people with no business designing a language. Not every language is PHP, JavaScript, C++, et al.

Take a look at a Lisp, APL or a derivative, or Ada for examples of languages designed by people who knew what they were doing. Lisp grew in universities under the direction of hackers and there are several standard dialects now. APL was designed by a mathematician originally as a teaching aid. Ada was designed in a contest by the US DoD after spending a long while collecting requirements and it was then fully specified before any implementation work was done.


> by people who knew what they were doing.

The Great Old Ones did not have access to any lost mystical knowledge. The Lisp inventors hadn't even sorted out variable scoping, leading many Lisps to have dynamic scoping, which is now widely agreed to be the wrong default. Also, the Lisp-1 versus Lisp-2 split. These were clearly people who were figuring it out as they went along, which they would be the first to admit.

> standard dialects

An oxymoron if there ever was one. :)

> APL was designed by a mathematician originally as a teaching aid.

...and it's virtually dead. There are many interesting ideas in there but it doesn't seem like Iverson's ideas about what is a desirable notation for thought resonated with any significant fraction of people. There were many years where APL was a standard part of a CS degree. With that much forced contact, you would expect it to have stuck around if it had so much going for it. (See: Pascal and BASIC, which punched above their weight in large part because of early introduction in schools.)

> Ada was designed in a contest by the US DoD after spending a long while collecting requirements and it was then fully specified before any implementation work was done.

Ada also has a lot of interesting ideas but... outside of the DoD, it's not exactly doing well either. It turns out that spending several years writing an enormous specification without regard to implementation leads to... unbelievably complex, expensive implementations that take years to reach the market. Who could have predicted that?

The economic cost of implementing the language is a salient part of its fitness. It doesn't matter how nice the language is if you can't actually use it because you don't have a working compiler.

C++ is only bad to the degree that you can completely wish away path dependence. (Hint: You can't.) There were many languages better than C++ designed at the time, but they didn't take off because they lacked the features we hate in C++ today. Those features existed as a path to lead the existing C programmers to C++. Meanwhile, other purer languages floated off in space, beautiful but unreachable by mortals.

JavaScript was just a sad rush job that we're all unfortunately stuck with. I agree with you on PHP.


The Great Old Ones did not have access to any lost mystical knowledge.

They at least had the advantage of not being forced to design a language for a failed startup, to be easy to learn where that's defined as resembling C, and other asinine considerations people continue to take into account nowadays.

These were clearly people that were figuring it out as they went along, which they would be the first to admit

I'm not claiming otherwise, but the birth of Lisp was a paper written by an AI researcher that had good thought poured into it, not something meant to look like C and be good enough for whatever its purpose was on a strict time schedule.

...and it's virtually dead.

Why would that matter? Why does every language need to appeal to many people? Why is that a desirable quality? There's still plenty of APL being written. I even have an article concerning this: http://verisimilitudes.net/2019-08-08

Who could have predicted that?

Ada was designed to be implemented efficiently and understood easily. It's harder to write a good C or C++ compiler than it would be to write a good Ada compiler; there are simply fewer people doing this, and GNAT is popular. The only compiler I'm aware of for C that anyone actually uses, sans the behemoths of GCC and Clang/LLVM, is tcc, and almost no one uses that. There are more Common Lisp implementations that are actually used than there are for C, and the only reason you'd see more C compilers than Ada compilers is that C is a smaller language, with a standard that lets the implementor get away with doing so little at the cost of every program written in the language, and because the bar for a C compiler is so low that it's expected to have bugs.

The economic cost of implementing the language is a salient part of its fitness.

When you start judging something in computing by its fitness, it reminds me of a virus, and that reminds me of The UNIX-HATERS Handbook and its description of C and UNIX.

C++ is only bad to the degree that you can completely wish away path dependence.

I don't know what you mean by this. Common Lisp doesn't care how code is loaded and Ada has a nice with and use system.

I do know C++ is a gargantuan language where the very grammar is ambiguous and is parsed differently by different compilers, and I'm also aware it has little in the way of static analysis due to this. Debugging Common Lisp is simple due to its interactivity, and Ada was designed to be statically analyzed.

Part of why I mentioned APL, Lisp, and Ada is because I use all three of these languages. I couldn't care less what the masses use. Do racecar drivers care about the people driving vans? Vans are more popular than racecars. Why should computing be different?


There are at least three other C/C++ implementations - AMD and Intel each have one, and MSVC. That's a total of six compilers for the language.


There are at least ten Free Software Common Lisp implementations and three proprietary ones that are still supported. There are four APL implementations, two of which are Free Software. As for Ada, I've read there are six Ada 2012 compilers, with GNAT being the only Free Software option; there are more, however, if you look to older Ada standards. It's interesting how these ostensibly more complex languages have compiler parity with C and C++, or exceed them, isn't it? I think the reason is clearly that it's much harder to optimize C than more abstract languages, and C++ is simply so complex that I've seen an entire company dedicated to just writing a parser for it.

As an aside, I find it interesting how many negative Internet points I'm receiving relative to the amount of responses there are. My figuring is many of these people can't argue with what I'm writing.


Software development nowadays is as much about using trendy / hip tools as anything else, sadly.

Especially web development.


Is it though?

The tooling is pretty wild and crazy but I see a lot of tools that really do solve problems.


I'll tell you why these languages like Lisp or APL or Ada aren't actually good languages.

If they were really good languages, people would not only strongly advocate for them, they would give credibility to their advocacy by writing plenty of great programs in them, even on their own time, because they are very productive in these languages. They would also create best-in-class tooling to further their productivity even more.

What actually happens with these languages is that a certain breed of highly opinionated but highly unproductive programmer gets lost in them, on their quest for a level of aesthetics or elegance or correctness that nobody but themselves actually values. They don't build platforms for others to build on, they build pillars to prove a point.


I disagree with your position that a language is good or not based on how people appreciate or use it. You're trying to tell me garbage such as PHP is one of the best languages there is, effectively.

In any case, I can still find examples of these platforms. Emacs is such a platform. For that matter, StumpWM is a platform of sorts. A nice quality of Lisp is making any program extensible with it trivially. As for Ada, governments use it for critical systems and I can't tell you the particulars of those. They use Ada because people will die if the software fails, so they're seeking quality, not quantity. You're advocating for quantity, not quality.

As for APL, you don't build platforms or such things in APL. If you don't see the beauty in APL, then that's your loss.

They don't build platforms for others to build on, they build pillars to prove a point.

First you claimed people don't build great programs in these languages, which I've shown is false, and now you're arguing that they build things, but not things someone can build on. You're conflating extensible software with quality software here.

What actually happens with these languages is that a certain breed of highly opinionated but highly unproductive programmer gets lost in them, on their quest for a level of aesthetics or elegance or correctness that nobody but themselves actually values.

That happens, less so with Ada, but why would that even be a bad thing? Is it wrong for someone to spend time working on what they view as an ideal and finished program, rather than just writing something that's already been done and requires little thought or effort on their part? It's infinitely harder, I think, to write novel programs than it is to simply implement some specification or write a copy or something else. That's what I do with my work. I don't see why you'd look down on that.

I'll let you have the last word, if you want it. I'm not interested in replying in this chain any further.


> I disagree with your position that a language is good or not based on how people appreciate or use it.

This is unsurprising.

> You're trying to tell me garbage such as PHP is one of the best languages there is, effectively.

I wouldn't go that far with PHP; it's only used in a certain (lucrative) niche and I can't think of anything great written in it.

Python is a good language that grew organically much in the way that I described.

> As for APL, you don't build platforms or such things in APL. If you don't see the beauty in APL, then that's your loss.

It really isn't a net loss, because I consider the pursuit of beauty in a program a fool's errand. I get to write useful things in less time by foregoing that ideal.

> In any case, I can still find examples of these platforms. Emacs is such a platform.

Let's suppose that Emacs is a great program. Most languages that are not completely irrelevant have "that one thing" for people to point to. That's not enough. Furthermore, Emacs consists of about 25% "ugly" C code.

> For that matter, StumpWM is a platform of sorts.

StumpWM? You're quickly running out of steam here.

> First you claimed people don't build great programs in these languages, which I've shown is false

That's not my claim. My claim is "plenty of great programs" and "best-in-class tooling". A language like Lisp excludes best-in-class tooling or good performance by virtue of its design, so all programs that are written in it have a disadvantage right off the bat.

Ada had some potential here, but it got stuck. I suppose it had something to do with too much of it being proprietary solutions.

> You're conflating extensible software with quality software here.

I don't mean "platform" in the sense of "program that I can extend". I'm talking about the ecosystem and tooling surrounding the language, its "commons", if you will.

> Is it wrong for someone to spend time working on what they view as an ideal and finished program, rather than just writing something that's already been done and requires little thought or effort on their part?

First of all, this is a false dichotomy. Secondly, a program that is ideal and finished but less useful is a poor trade-off. A program that is less ideal and unfinished but has better functionality as a result is, in my view, a better program. More important, though, is not just the one program, but all the programs that can or cannot exist based on the trade-offs you chose.

If you just want to create pieces of art for the very few to appreciate, that's your prerogative. If instead you focus on usefulness, you can create actual wealth, for yourself and for others.

> It's infinitely harder, I think, to write novel programs...

It is indeed, but it's much harder still to also make them ideal.

> That's what I do with my work. I don't see why you'd look down on that.

I don't look down on it. I just see nothing to look up to.


> All this async code without decent locking primitives is leading to a rabbit hole of race conditions...

That was an issue long before async/await was a thing.


Plenty of people replied to counter this misinformation, so why is it still at the top?


Too many C devs upvoting it, I guess.


No, the abstraction is much better than C in 90s.

It comes from functional programming concepts (namely monadic design, which makes Haskell the 'best imperative language') that have some nice algebraic properties. It's easy to reason about, and you don't really need a locking mechanism above the abstraction if it's used properly.


Have C programmers already learned how to do memory management properly? Apparently 50 years have not been enough.

Don't assume all programming languages have the same design flaws as C.


A good editor warns about some of those conditions at least.


It's best to use the phrase 'async control flow' instead of 'race conditions' when you're talking about single-threaded execution.


This is huge! Finally, no more need to use IIFEs for top-level awaits


It's nice, I guess, but huge?

Instead of:

    async function main() {
       // code
    }
    main().catch(console.error);
I'll be maybe writing:

    try {
      // code
    } catch (ex) {
      console.error(ex);
    }
Hrm?


Top level await does more than remove a main function. If you import modules that use top-level await, they will be resolved before the importing module runs.

To me this is most important in node where it's not uncommon to do async operations during initialization. Currently you either have to export a promise or an async function.
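
For example (a sketch; `connectToDb` is a stand-in for any async setup):

    // db.mjs: the module body awaits before its exports are consumed
    const connectToDb = async (url) => ({ url }); // stand-in for a real driver
    const connection = await connectToDb(process.env.DB_URL);
    export default connection;

    // main.mjs: doesn't start executing until db.mjs has resolved
    import connection from './db.mjs';
    console.log('connected to', connection.url);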


Do we really want slow imports though? If you have a bunch of modules with async setup functions, would you not be able to Promise.all() them?


That kinda stuff is typically an antipattern in C#; an async static "factory method" would be used (I use this pattern myself in TypeScript). But I guess JavaScript has odd stuff like code chunking, so the import may be pulling remote code, etc.


Sure, in many situations, but I guess if you need to set up things that have dependencies (init A, then init B, then init C) this will help.


How so? I'm not seeing the benefit over exporting A, B, and C as functions, and then putting them together in another spot (like a composition root for pure DI, or an IoC container, etc).

Is the argument for top-level await that you don't need the other spot? Because I feel like you still do - except now it's implicitly inside not only A, but likely B and C as well to some extent. And in a very inflexible way.


No one should be asynchronously executing code on import? I'd rather my code call a function to kick it off.


Please, this.

Synchronous effectful imports are already the source of so much frustration, and now we're adding asynchronicity to them?

Just export an init function and let your users choose when/where to run your effectful initialization code, sync or async.
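
A minimal sketch of that shape (assumed names; `connectToDb` stands in for any effectful setup):

    // db.mjs: no effects at import time
    const connectToDb = async (url) => ({ url }); // stand-in for a real driver
    export async function init(url) {
      return connectToDb(url);
    }

    // main.mjs: the caller decides when initialization runs
    import { init } from './db.mjs';
    const db = await init(process.env.DB_URL);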


Oh that's interesting if this is the case. So now a module export can contain an asynchronously initialized db handler for example?


Please don't do that though!


For some use cases, the difference can be a little more dramatic.

    fetch(allResourcesUrl).then(async response => {
      let allResources = await response.json();
      let pageResource = await (await fetch(allResources.page1.url)).json();

      // set up page with pageResource
    });
can be replaced with

    {
      let allResources = await (await fetch(allResourcesUrl)).json();
      let pageResource = await (await fetch(allResources.page1.url)).json();

      //set up page with pageResource
    }
Note use of a top-level block instead of a function context to hide local variables.


What's with the `let` though?


let plus the curly-bracket scoping means that the variables only exist within the curly brackets, and not in the global scope.


Maybe he meant why not const?


You would use const, unless you wanted to reassign the variable later in the same scope.


Correct. Example doesn't show modification of the variables later, better to use const.


Const is a complete waste of time for non-primitive types. I defy anyone to show me a single bug in a popular program that could have been prevented by using const for a function-local object. It can’t be done. There are bugs caused by mutable state. There are bugs caused by reassigning globals. There has never been a bug caused by reassigning a function-local variable while leaving it mutable.


Using the more restrictive construct until you actually need additional features (like identifier rebinding) is just engineering 101.

I think the onus would be on you to prove that using a less restrictive concept is worthwhile because it saves you two keystrokes. That, in contrast, seems like the opposite of good engineering. Like using classes over structs because class is shorter to type.

The more I think about your post, the more absurd it becomes.


I think it can be argued both ways. I've seen a large TypeScript codebase with a pre-commit hook that enforces use of const on all non-reassigned variables. With that being done everywhere, it made for easy reading - whether a variable was being reassigned or not effectively became annotated in its declaration.

On the other hand, there are some good reasons not to use const everywhere we can. Paul Sweeney lists a few here: https://medium.com/@PepsRyuu/use-let-by-default-not-const-58...

I'm not sure where I land yet. Perhaps it's a decision to make separately for each codebase.


I don't think I'd call any of those reasons good; the main thrust of them is "const doesn't do everything, so don't let it do anything". If that line of reasoning is appealing, then one might as well continue using var.


The main benefit of let/const over var is that it's block scoped. The benefit of const over let is pretty much nothing. It only prevents some limited form of re-binding. You can still easily re-bind const variables from inside a function:

    const x = 1;
    (function() {
        const x = 2;
        console.log(x);
    })()
or from an argument:

    const x = 1;
    (function(x) {
        console.log(x);
    })(2)
So it has limited value in any code that uses closures.


Shadowing isn't rebinding?

If you've declared const x = 1, then that will hold for your scope?

If you go into a separate scope... Well, then you're in a separate scope?


Neither of those rebind the `const x`, which will continue to hold the value it was given.


The const keyword has absolutely nothing to do with state mutability, that is a misconception. const in es6 is a keyword meant to explicitly prohibit identifier reassignment and it does indeed reduce bugs because it communicates developer intention regarding how a variable reference is expected to behave in a section of code. Further, use of const by default is a best practice because it leaves less room for error in cases where a reassignment must never occur and increases readability by convention of the let keyword signaling that a reference will behave in a volatile way in proximal logic.


My whole argument was premised on const not having anything to do with mutability.


By waste of time do you mean performance or coding time? I feel like writing one more character is not a waste of time if you are just following the convention that anything that is not going to be mutated should be marked explicitly. It's not to prevent bugs, but for readability and ease of use.


Time deciding if you can/should rebind a variable or not is a waste of developer time.


You will know ahead of time whether you're going to be rebinding a variable. The use of const is a hint to the next person reading your code, since they won't be privy to your thought process.

In an ideal world, const would be the default, and you would have to opt into non-const.


Then, when you have to change or debug the same code instead of writing new code, you will have to spend more time understanding which variable is the source of a state change, for example inside a loop. That is more of a waste of time to me.


I sort of agree; I solve this by never using let and never rebinding :) No need to think about it at all.


Then you end up playing this game of jumping through hoops to omit variables. It's a tempting waste of time because it feels productive.


Strangely it turns out that rebinding variables is a really uncommon need, so there's pretty much zero game playing.


> There are bugs caused by mutatable state.

> There has never been a bug caused by reassigning a function local variable while leaving it mutable.

The latter _is_ mutable state. const doesn't prevent all mutations, but it does prevent some, while let prevents none. I'll take the limited protections of const (with awareness of its limitations; many examples of let are of confused devs trying to avoid making their objects immutable).


I'm with you. Though typing `if (a = 1) {}` happens to me sometimes, and const may catch this earlier. I don't use a linter anymore, but I suppose any serious linter would warn about assignment in a condition, anyway.


Why have you stopped using a linter?

The popular airbnb eslint rules[1] require the use of const where a variable is not reassigned, which I've occasionally found handy.

[1] https://github.com/airbnb/javascript#references--prefer-cons...


If all you're doing on catch is forwarding to `console.error`, you could just write:

    // code
Outputting errors to console is the default behavior.


There's an important difference: the process will crash if you don't catch the error.


While this is 100% correct, if the main function has returned then the process was going to end anyway (assuming there isn't additional code after the call to main). If there's a main loop, then the catch needs to be inside that loop, not outside of main.

The only difference it will make here is to suppress the default stack trace[1] and errorlevel returned to the shell. If you are writing in this kind of "scripting" style, you probably don't want to suppress errorlevels, so leaving out the catch is not only simpler, it is safer.

[1] You may get a stack trace with an Error object, but not the default one from the Node process.


> While this is 100% correct, if the main function has returned then the process was going to end anyway (assuming there isn't additional code after the call to main).

That's not necessarily the case (or maybe poorly worded), e.g.:

    async function main() {
      // code
      setInterval(() => console.log('hey'), 1000)
    }
    main()
In real life, it would probably be an HTTP server holding up the process. It's true that you likely want to crash hard if you have an unhandled error during server initialization, but I would still catch the error because Node.js will otherwise print a bunch of ugly "UnhandledPromiseRejectionWarning" messages. E.g.:

    main().catch(err => {
      console.error(err);
      process.exit(1);
    });
With top level async landing in v8, I guess Node.js will eventually stop printing those warning messages.


You raise an interesting point. If you are doing anything non-trivial, you should definitely be using a main loop and catching errors inside that loop. You really don't want to rely on the default behaviour of orphaned timers, because that can get interesting.

I did a quick test just to see how bad things might get if you were to rely on orphaned timers like this. The important thing to keep in mind is that each of those timers is effectively its own thread, which can throw errors and terminate itself. But if those are orphaned outside of a runloop that can handle those errors, they will not be caught by the catch outside of main (which only catches errors thrown in the "main thread"). Rather, they become top-level, uncaught exceptions, which also terminate the process.

Here's an example illustrating what happens:

    async function main() {
        setTimeout( () => console.log( 'Hello' ), 3000 );
        setTimeout( function() { throw new Error('error 1'); }, 1000 );
        setTimeout( function() { throw new Error('error 2'); }, 2000 );
        throw new Error('oops');
    }

    main().catch( x => console.error( x ) );
The result here is that `oops` is displayed (the caught error from the "main thread") and then `error 1` is displayed, but the process is then immediately terminated. Neither `error 2` nor `Hello` are displayed.

Note that I use the term "thread" loosely, in the "green thread" sense. Hopefully the meaning is clear.


One thing that has bitten me is that when you are using vm.run* and friends, it's impossible to get a result out of them if you need to make async calls. Hopefully this solves that case.

There is also the original use case from the proposal: needing to do async calls during module loading.


I think the boon here will be for scripts that run once. Say I just want to query a few tables. Now, I no longer need a `main()` wrapper:

  const result = await db.events.find();
  console.log(result);
That's my whole script.


Or my preferred (gross) version:

    void async function main() {
       // code
    }()


Mine all tend to look like

  (async () => {
    // code
  })().catch(console.error)


Wow, I had completely forgotten the existence of void in JavaScript.

Why is it needed here? To force an expression without using the parentheses?

Edit: Yes, it seems so, and this usage with functions is explicitly documented there: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Yep. Exactly why. Seems silly since it doesn’t really care for types in the first place.


You forgot the IIFE


Seems convenient. Though I've never thought of top-level await as the killer feature I've been waiting for out of V8. Unless I'm missing something about what this enables...


Though it goes even beyond that.

It not only works for the main entrypoint module, but all other modules as well.

Though I can't say I've ever encountered a reason to have that...


What is an IIFE?


Immediately invoked function expression - aka an anonymous function that executes as it is interpreted.

    (function() { console.log('test'); })();
In old versions of JavaScript, variables had to be scoped to the nearest function (rather than the nearest block), so IIFEs were often used for namespacing:

    var $ = (function() { this.version = '1.2.3'; })();
    $.version; // 1.2.3
This worked because the function would be immediately invoked upon interpretation, scoping its contents to itself (aliased by the variable name).


Hate to be a wet blanket, but that second code sample won't work. The function doesn't return anything so $ === undefined. Even if it did, this in that context is bound to the global object, so is the same as setting a variable without var or let.

A proper example of IIFE for lexical scoping would be something like this:

     var numbers = "";
     (function(){
         for (var i=0; i<5; i++) {
             numbers += i + " ";
         }
     })();
     console.log(i) // ReferenceError: i is not defined
Nowadays of course you could just use let instead of var

Your code snippet would kind of work as expected if used as a constructor. As in:

    var $ = function() { this.version = '1.2.3'; };
    var obj = new $;
    console.log(obj.version); //1.2.3
Of course that is not a IIFE.

Ah...brings me back to my SO days...


Hate to dry off the blanket but they just forgot the `new` keyword for `this` ;)

    var $ = new (function() { this.version = '1.2.3'; })();
    console.log($.version); // '1.2.3'
EDIT: Or if you really want to abuse the spec

    var $ = (function() { if (!(this instanceof arguments.callee)) return new arguments.callee(); this.version = '1.2.3' })();
    console.log($.version); // '1.2.3'


lol. But yes, I do really want to abuse the spec. You could also do this:

    var $ = (function(){ this.version = '1.2.3'; return this; }).call({})
    console.log($.version); //1.2.3
But that's not quite as ugly so I guess I lose some points.


Immediately Invoked Function Expression;

(function() { alert('swiggity swooty'); })();


Also, I believe TypeScript uses these to transpile classes with public and private properties/methods (can't verify that's still case atm).

It's a really neat pattern imo, I remember it fondly from my JS heavy days :)


No, while this is sometimes used for private properties/methods, TypeScript never used it. TypeScript's private properties/methods are just compile-time errors, they're still public in the generated code.


You're right[0], which is weird because they got most of the way there. I guess I just assumed since they were leveraging IIFEs this would be a natural use case. I'd be interested in knowing why they didn't actually.

According to this[1] exchange on Stack Overflow, the IIFE is used by TypeScript because of other scoping issues (specifically protecting class properties before instantiation and defining interfaces).

[0] https://yakovfain.com/2015/06/30/the-private-in-typescript-i...

[1] https://stackoverflow.com/questions/56086411/why-does-typesc...


It's a useful pattern that I used a lot, though I don't know if I would agree that it's neat. I definitely prefer the explicit annotations or even the python underscore convention.


Immediately Invoked Function Expression: (function () { statements })();


An easily Googleable acronym.


he should have said IIAFE actually (immediately-invoked-async-function-expression)

  (async () => { console.log(await 'hello world') })()


All async functions are functions, so not really.


So COMEFROM is now a first-class feature of the most popular programming language in the world. Intercal really was ahead of its time.


I'm not sure I quite see the analogy to COMEFROM.

The "problem" with COMEFROM is that at the "target" location (the one control comes from) there is no in-source indication of the control flow transfer. So what looks like linear code turns out to have this unexpected detour to the location of the COMEFROM instruction. This hinders understandability of the code.

"await" in JS doesn't have that problem: there is an explicit control flow operation, in the form of resolving a promise, that eventually transfers control to the location of the "await" call. And even then, it's async from a run-to-completion perspective, so doesn't affect linear code, modulo other await calls. I guess the concern is that you could have linear code that calls a function, which does an "await" and you would effectively have a control flow detour that's hidden from view? In that sense, I guess this is sort of like COMEFROM... At least you have to explicitly opt in (via "async function", or async module) for it to be a problem.

Anyway, a better analogy from my point of view is that async/await is a very limited form of call-with-current-continuation or so. And with generator functions, JS already had that sort of, but without some of the nice ergonomics.


> I guess the concern is that you could have linear code that calls a function, which does an "await" and you would effectively have a control flow detour that's hidden from view?

That's the main issue. The overall control flow cannot be known until the entire abstract syntax tree is generated.


To be fair, that's a problem with GOTO as well (which JS doesn't have, yes); you don't have to go all the way to COMEFROM to get that....

That said, with JS you can't know the overall control flow even one you have the AST, because that `foo()` function call could go anywhere depending on what people did to the global scope independently of your AST.

Come to think of it, you can't even determine control flow from the AST in C, unless everything involved has static linkage...


Could you elaborate?


COMEFROM began as a joke control-flow construct, but the principle turns out to have a deep semantic value:

https://en.wikipedia.org/wiki/COMEFROM

Aspect-Oriented Programming is a kind of COMEFROM. The proposition in the comment is that await is another.


Just like Lisp and Algol 68 on other domains, apparently good features tend to take time to become mainstream.


It's a good point, and I remember even some fairly "serious" comp sci people saying COMEFROM was less of a joke than it originally seemed.


I think async/await probably makes more sense in a typed language, where a compiler can tell you when you're missing an await, or at least warn you about not dealing with potential side effects and error handling. For something like JavaScript, it'd make more sense to me to have the runtime always and implicitly await the result of async functions, and instead make developers explicitly say when they wish for the result to be async. For example, instead of:

    const data = await fetch()
    push(someData) // async, runs in the background
You would do:

    const data = fetch() // Runtime detects promise and awaits result
    async push(data) // the async keyword would return a promise and execute the push function asynchronously, allowing the next line to execute
In this fantasy world the "await" keyword would work anywhere and as you'd expect – awaiting the result of any promise:

    const data = fetch() // implicitly await
    await async push(data) // this would also be "synchronous" in that it suspends execution of subsequent code until `push` fulfills or rejects the promise, and so it'd have the same effect as implicit await
Point is, you'd probably await that promise elsewhere, so you'd actually store away the return value of the `async` call and `await` it later on; another example would be to await a block.

Promise rejections in implicit awaits would halt execution, just like a sync function throwing an error, so you wouldn't "miss" an error somewhere because runtimes swallow promise rejections. (Well, at least Node wised up eventually.)

This means there'd be no difference in function declaration between sync and async functions; it'd be determined by whether they return a promise or not, which I think should be possible to determine statically by a JIT compiler in most cases, so not adding too much (if any) overhead.

Kind of a half baked thought, but point is I always felt the async/await thing was kind of backwards in JavaScript.


Promises are nice in JavaScript because they’re just a value with no magic. Anything can create them, not just async functions. Generic/higher-order functions and so on can get involved without needing to know the difference. A proposal to introduce magic at calls sounds really awful, sorry.


"No magic" is rich — they swallow errors, for one. In any case, I don't think anything I said precludes the creation of promises outside of async functions; in fact, quite the opposite. The difference is that the runtime would implicitly await the resolution of a promise returned from a function (any function, there'd be no such thing as an "async function"), and if you actually wanted things to progress asynchronously you'd simply add `async` before the statement and it'd return a promise (for keeps) of the value of the statement, which could be the return value of a function, or even something as simple as:

    const val = async (1 + 2)
    await val // 3
Maybe it's a genuinely dumb idea, for reasons I can't fathom, but it seems to me this would make working with async code much simpler than it is now, where if you forget an await you've probably introduced a bug, and the only way to know if you need await is to peruse docs (hoping that they're accurate) or judiciously sprinkle it everywhere.


How is swallowing errors magic? try/catch does that and predates ES3.

To clarify your example, what about:

    function foo() {
      const val = async (1 + 2)
      return val;
    }
    const x = foo();
Does the `x = foo()` block/implicitly await?

> if you forget an await you've probably introduced a bug, and the only way to know is ... docs

Given that async is "contagious", I'm having a lot of trouble imagining a scenario where a codepath could run and appear to work, but actually be hiding a bug because you didn't realize something was async. Unless you're just saying that the bug would be apparent as soon as you ran the code, but the syntax checker wouldn't flag it for you?


But try/catch makes swallowing errors explicit, you have to catch it to swallow it. With promises it's the opposite situation, errors will be swallowed unless you explicitly handle the rejection. At least the runtimes have grown up to show console output when there's an unhandled rejection, but man it was pretty dark for a while.

> Does the `x = foo()` block/implicitly await?

Yes, that's what I'd expect, because `foo` returns a promise.

> Unless you're just saying that the bug would be apparent as soon as you ran the code, but the syntax checker wouldn't flag it for you?

It may or may not be apparent even from running the code, consider the following:

    let data
    try {
        data = readFile('some-file.csv', 'utf8')
          .split('\n')
          .filter(l => !!l)
          .map(l => l.split(','))
    } catch (e) {
        data = []
    }
It's not particularly good code, granted, but it's also not a contrived example. What's the bug? Well, assuming `readFile` is sync, and `some-file.csv` is a nice csv file with no surprises in it, this should make data an array filled with data from said file. If the file is empty, it's just going to yield an empty array. If the file can't be read for some reason and `readFile` throws an error, it'll be an empty array.

Now make `readFile` return a promise instead and `data` will always be an empty array, regardless of whether the file exists, can be read, or otherwise all conditions for a happy path are met. Why? No `split` method on the promise.

The problem isn't promises of course, it's the fact that we're doing too much in a try clause, or at least not dealing with specific errors properly. But let's be honest – who hasn't seen (or even written?) code like this in the past?

It might've even worked fantastically well for a long time, years even, until someone comes around and changes `readFile` to be async, for reasons, and now it breaks in a subtle way and it's fun and games trying to find why `data` is always an empty array even though it seems everything should be fine. Actually, it doesn't even have to be `readFile` that changes, it might be a function that it depends on, causing bugs further up the call stack. It happens, no matter how semantic our versioning is.

If however the runtime would always implicitly await return values, and you'd have to explicitly mark statements as async to break out of that, then this code would continue to function regardless of whether `readFile` or one of its dependencies returns a promise or the actual file contents, because the runtime would deal with it. As it is now, you have to go and change all code that calls `readFile` to be async, so you can await, meaning anything further up the stack also needs to be async, so you can await. It's a bit of a foot gun I think.

In any case, it's just a thought and at best a half baked one at that. I just find the async/await semantics to be backwards, and while I've used it for a few years at this point I still keep running into dumb situations like the above. Maybe I'm just a bad programmer, I'm certainly not excluding that as a possibility. :o)

(Apologies for the wall of text.)


> With promises it's the opposite situation, errors will be swallowed unless you explicitly handle the rejection.

Yes, I've been bitten by this too, it's certainly a drawback with the design of Promises. I wouldn't describe it as magic though, since both the implementation and the impetus are easy to understand.

> Actually, it doesn't even have to be `readFile` that changes, it might be a function that it depends on, causing bugs further up the call stack.

I don't think it's possible for readFile to become async without at least some change to it (possibly just adding 'async' and 'await' keywords, but at least some change), except maybe a rare case of a tail call.

> even though it seems everything should be fine

In what way would it seem that everything should be fine? If readFile() were changed from sync to async, wouldn't every single call to readFile() in the entire codebase need to be changed, just like if readFile() were changed from returning a string to returning a File object? It's not like most or even any at all of the calls to readFile() wouldn't need changing, then I could see how it might seem like everything would be fine.

> breaks in a subtle way

But this isn't a subtle bug, it completely breaks as soon as readFile() is changed from sync to async, right? No testing, automated or manual, of this codepath would work at all after the change, right? It's not like a cursory smoke test of this codepath seems to work fine, then I could see how the bug could seem subtle.

> I still keep running into dumb situations like the above.

You don't seem like a bad programmer, which is why I'm skeptical of the example you gave.


> I wouldn't describe it as magic though, since both the implementation and the impetus are easy to understand.

That's fair, magic may have been a bit hyperbolic.

> [...] just like if readFile() were changed from returning a string to returning a File object

But that would change the semantics of the function; it literally changes the return type. My point is that adding `async` really just changes the mechanics of the function, not the semantics. If my function returned a string before, and I add `async`, it'll still return a string; just eventually. As a caller, I don't really care, I just want the darn string.

Sometimes, as a caller, I do care, and that's exactly why I think having the caller decide when to run something async makes more sense. (There's a whole other discussion that could be had here about how JS promises are a poor async abstraction anyhow, but I digress.)

> But this isn't a subtle bug, it completely breaks as soon as readFile() is changed from sync to async, right?

No it's definitely subtle. In the example I gave, the code would use the default empty array value when there's an error reading the file, for whatever reason. For the happy path, it'll work just fine, though it probably wouldn't deal with invalid input very well. Change the mechanics of readFile to async though and it'll always return the default value, even though the semantics of readFile stays the same. It still returns a string, just eventually, but because the code expects a string, it'll always break because it gets a promise instead. Add `await` and it'll be fine, but now whatever function that code is in is async, and whatever function calls that needs to also `await`, ad nauseam.

> You don't seem like a bad programmer, which is why I'm skeptical of the example you gave.

Hey, thanks! :o)

To your point though, it's definitely representative of the kind of code I come across on a regular basis. Many a time have I had to help colleagues debug this kind of issue, and many a time have I shot myself in the foot in similar ways. In any case, JS async/await semantics are set in stone now, and it's probably too dynamic a language for something like implicit await to work (performantly) anyhow, as previously mentioned.

I appreciate you taking the time to discuss, it's nice being challenged on the actual topic, without it devolving into ad hominem nonsense. There are still good corners of the internet after all!


> and the only way to know if you need await is to peruse docs (hoping that they're accurate) or judiciously sprinkle it everywhere.

Knowing whether what you’re calling is async is part of knowing what you’re calling at all. Sprinkling await everywhere to try to mask the difference is horrifying. (I kind of wish it didn’t work on non-thenables for that reason – half the time, it’s a bug.)


But it can change under your feet – someone might change a function to be async, that used to be sync, and now your code is broken. The code does the same thing, it's just that it turned from returning a value to returning the promise of a value, and now your code is broken. Maybe it's not the function you're calling, but a function further down the stack, that you don't even know about.

It may be that the docs are bad and don't even tell you it returns a promise. Heck, maybe it only returns a promise on Tuesdays, or at random like some other commenter wrote – you'd have to then sprinkle `await` there to make sure you're ok, even if most of the time you don't need it.


> maybe it only returns a promise on Tuesdays, or at random like some other commenter wrote

But async/await makes this better, because a function marked async always returns a promise.


> But it can change under your feet – someone might change a function to be async, that used to be sync, and now your code is broken.

Someone might change a function that returns a single value to return an array, and now your code is broken. Someone might rename the function, and now your code is broken. This is the nature of breaking changes, and the same solutions apply.


Yeah, but that changes the semantics of the function, whereas `async` arguably just changes the mechanics. The promise itself is not interesting; it's whatever value it (eventually) returns. My point is that a language that had implicit await would let you go on making functions as async as you want them to be, and callers would be none the wiser. It'd also allow the caller to decide when to run things asynchronously and even truly defer values, something which JS promises can't do since they execute immediately, but that's an altogether different discussion.


I like your idea, but I don't see how it could work in an untyped language. Consider:

    function foo() { return 1; }
    function bar() { return fetch('http://example.com'); } // implicitly async
    
    function qux() {
      const fn = Math.random() > 1/2 ? foo : bar;
      fn();
      return 1;
    }
Is qux() synchronous?


The function is both synchronous and asynchronous until it is called by the caller. This is called Shannon's Cat.

Jokes aside, I'd like to add that just because a function returns a promise doesn't mean the caller will always want to wait for it to resolve. I think of `await` as a simple modifier that casts the return value from a `Promise<T>` into `T`. By getting rid of the modifier, we can no longer assume that the caller wants to implicitly wait.


You're right of course, and I have zero data to back this up other than anecdotes from my own experience, but still I posit the most common case is that the caller wants to await the return value, not run the function asynchronously. This is why I think it'd make more sense to flip the semantics so you'd have implicit await, and have to explicitly mark the things you want to run asynchronously.


Implicit await would be a nightmare. You could never tell looking at a given block of code whether it yields to the event loop or not, introducing invisible concurrency problems. Even static types wouldn't solve this without looking at every value and function return type.


Would love to learn what concurrency issues you'd see with implicit await, that you presumably wouldn't see otherwise. You're probably right, I just can't think of any examples.


I think there might be some confusion here. `foo()` will run asynchronously regardless of whether or not `await` is specified. I agree that in most cases, you will want to unbox the Promise. It is still very common to pass Promises around as values.


The point isn't so much whether qux() is async or not, it's whether the caller wants it to execute before subsequent statements or not. In the current state of the world:

    const a = await qux()
    const b = qux()
    const c = await qux()
    const d = (await b) + 5
But if I had my little way:

    const a = qux()
    const b = async qux()
    const c = qux()
    const d = b + 5
The weird one is `d` of course, why would it implicitly await there? Because `+` tries to get the value of `b` in order to add `5`.

How the runtime would optimize this scenario I don't know, it could probably statically determine that qux() may be async and therefore determine it would have to implicitly await its return value even if it's just the number. Obviously this is a contrived scenario, but I'd rather pay a small runtime cost, than pay the cognitive overhead of remembering to await things everywhere. Like most people, I forget things...


I think you missed my point. I understood how you inverted explicit and implicit, you saw me explain that to someone else who asked about Promise.all().

What I'm asking you is, how does anyone, be it the programmer, compiler, or runtime, know whether to block on qux() or not, given that it sometimes synchronously returns 1?

Are you saying that in your proposal, it would become part of the calling convention that any time any function at all returns (and no 'async' keyword was used), the caller would check if a Promise was returned and if so, return a continuation promise?

That is no small runtime cost.

In an untyped language like JS, the cases where static analysis could determine whether a function returns a Promise or not are vanishingly few, probably about the same as the cases where a function can be inlined. Remember, anyone at any point can overwrite Array.prototype.push to be anything they want, including a function that returns a Promise. Any function that takes and calls a callback might be calling an async function. Any math or string operation could be calling .valueOf() or .toString(), which might be async.


Making await implicit would make it difficult to manage parallel promises. You would either have to make an exception for Promise.all or add new syntax. It's also not unheard of to have hanging promises (fire and forget).

Types make it easier to catch mistakes early in general. TypeScript's linter has a 'no-floating-promises' rule to help avoid forgetting to await.


I think you missed this part: "instead make developers explicitly say when they wish for the result to be async"

So using Promise.all() would be as simple as:

    const results = Promise.all(async fetch(url1), async fetch(url2))


Now if only python would do the same.


Top level awaits are allowed with a compiler flag now in Python 3.8 : https://bugs.python.org/issue34616

python -m asyncio

Starts a new repl with the compiler flag to explore top level await in a repl


Impossible due to how async is implemented (as generators).


JS async is implemented as generators as well AFAIK


I'm not sure how they're handled internally, but async functions are no longer generators. For a while after generators and before async/await, generators were used as a polyfill.


Because of how similar they are, it would make sense to implement async/await using existing generator backends in the VM, right?


Not unless promises are implemented with generators. I dunno though, I feel like there's some overlap between the two concepts, so maybe they are?


Promises are just objects which represent an asynchronous value.

Async/await are coroutines: they have a shared context, stack, etc. They can be paused and resumed.

The implementation of async/await depends on the engine, but I'm pretty sure they use the same generator backend.

Babel and other transpilers convert async functions into actual generator functions. By contrast, promises are a fairly simple runtime library.
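
The transform is roughly "await becomes yield", plus a driver that pumps the generator with resolved values. A minimal sketch of that co-style pattern:

    function run(genFn) {
      const gen = genFn();
      return new Promise((resolve, reject) => {
        function step(advance) {
          let result;
          try { result = advance(); } catch (e) { return reject(e); }
          if (result.done) return resolve(result.value);
          Promise.resolve(result.value).then(
            v => step(() => gen.next(v)),
            e => step(() => gen.throw(e))
          );
        }
        step(() => gen.next());
      });
    }

    // `yield` plays the role of `await`:
    run(function* () {
      const value = yield Promise.resolve(42);
      console.log(value); // 42
    });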



Were any of these concerns allayed?


Is this part of the standard?


It's a stage 3 proposal[1], which according to the TC39 process[2] means "the solution is complete and no further work is possible without implementation experience, significant usage and external feedback."

In other words, it's all but standardized. Barring significant blockers coming from actual implementation experience this will most likely be ratified.

[1]: https://github.com/tc39/proposal-top-level-await

[2]: https://tc39.es/process-document/


And the way the TC39 works, it won't progress until multiple vendors can implement it.

Being implemented by multiple js engines is how JS features become standardized.


Yup, thanks for adding that. It's a good process I think, though it is kind of a double edged sword. On the one hand it's nice because it means the standard isn't bloated with a bunch of stuff that no one implements, but on the other hand it also means it's much harder to get rid of stuff that turns out to maybe not be such a great idea after all, leading to bloat anyway.

Still, it's a pretty good process I think.


Did you read the commit? The spec material was linked: https://tc39.es/proposal-top-level-await/#sec-execute-async-...

It's a Stage 3 proposal https://github.com/tc39/proposal-top-level-await


I did. It looked like a diff, and since I don't have much interest in the internals of v8 I closed the tab.

Welcome to the internet!


Finally JS finishes reinventing the benefits of imperative programming.


I was waiting years for this. From now on code will be much more readable and cleaner without all those IIFEs.


This looks like Dart's handling of async/await.


Awesome! How long before this shows up in canary?


Does rust async support this?


Rust async is just syntax: there is no event loop built into the language.


Tokio does, with a #[tokio::main] attribute on the main function.


Some async runtime libraries have started offering an attribute on `fn main` to make it async, which is pretty close to the same thing.


[removed]


Nothing would change here.

Normally, await can only appear in a method that has been marked as async. However, when you write toplevel code (code that's not wrapped in a function), you can't mark the function as async (because there is no function) so you can't use await.

Enabling toplevel await fixes that.


Nothing. It's just that you can have

    const result = await someFetchOrSomething(str);
    return result.foo;
while not inside an async function. You still can't use it in a non-async function.


You can't return when you're not in a function ;-)



