
There's nothing to stop you from setting up synchronization constructs like mutexes in a cooperatively-scheduled application. Kotlin bundles them with kotlinx.coroutines [0]. If anything, having possibly-concurrent methods explicitly marked makes mutual exclusion easier to reason about, because you know exactly which functions may have concurrency problems.
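
For instance, a minimal sketch using the kotlinx.coroutines Mutex (the names here are hypothetical); withLock suspends rather than blocks while waiting for the lock, and the suspend modifier is the explicit marker that other tasks may interleave:

    import kotlinx.coroutines.sync.Mutex
    import kotlinx.coroutines.sync.withLock

    private val mutex = Mutex()
    private var balance = 0

    // The suspend modifier marks this as a possibly-concurrent function.
    suspend fun deposit(amount: Int) {
        mutex.withLock {   // suspends, rather than blocks, if the lock is contended
            balance += amount
        }
    }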

Are you saying that there's a mechanism in Loom, absent from cooperatively-scheduled languages, that somehow forces you to mark mutual exclusion?

[0] https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-corout...




> There's nothing to stop you from setting up synchronization constructs like mutexes in a cooperatively-scheduled application.

Right, but assumptions of mutual exclusion are still implicit, so you must carefully re-analyse them any time you add a scheduling point. That is, code may implicitly rely on mutual exclusion under cooperative scheduling in a way it cannot under preemptive scheduling.

> Are you saying that there's a mechanism in Loom that somehow forces you to mark mutual exclusion which is absent from cooperatively-scheduled languages?

It's not a mechanism in virtual threads but in Java threads in general. Yes: to mark mutual exclusion you must employ some synchronisation construct explicitly. That is not the case with cooperative scheduling, where mutual exclusion is implicit, and that's what makes cooperative scheduling less composable.


I think we're talking past each other.

Can we make this concrete? In what way is mutual exclusion implicit in Kotlin coroutines that is not also true in Java Loom?

Here's Kotlin's explanation of how to do mutual exclusion [0], and here's a blog post explaining how mutual exclusion works in Java [1]. The main difference I see is that Java's `synchronized` is a full syntax construct while Kotlin uses its trailing-lambda syntax, but that's not semantically significant.

What do you see as more explicit in the Java version?
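
For comparison, here's a minimal sketch of both forms (hypothetical names), written in Kotlin for both sides since Kotlin's synchronized uses the same JVM monitors as Java's keyword; the surface difference is that one suspends while waiting for the lock and the other blocks the thread:

    import kotlinx.coroutines.sync.Mutex
    import kotlinx.coroutines.sync.withLock

    private val mutex = Mutex()
    private val monitor = Any()
    private var counter = 0

    // Cooperative flavour: suspends while the lock is held elsewhere.
    suspend fun incrementWithMutex() = mutex.withLock { counter++ }

    // JVM monitor, the same construct Java's synchronized keyword uses: blocks the thread.
    fun incrementWithMonitor() = synchronized(monitor) { counter++ }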

[0] https://kotlinlang.org/docs/shared-mutable-state-and-concurr...

[1] https://www.baeldung.com/java-mutex


Kotlin is not really a good example because it offers the worst of both worlds in this area: you must account for both preemptive and cooperative scheduling, because both occur at the same time (they had no choice, since they control only the frontend compiler, not the platform). So let's instead look at a language like JS, which offers "clean" cooperative scheduling, and compare that to Java.

An async JS function can contain code like this:

    x = 3;
    foo();
    print(x);
Now suppose that you want to perform some I/O operation in foo, which would require changing the function to:

    x = 3;
    await foo();
    print(x);
But is that allowed, or does our function depend on x (which is shared among tasks in some way) not being changed between the first and third lines? Does the original function rely on the implicit mutual exclusion for its correct operation or not?
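
Here is a Kotlin sketch of exactly that trap, run on runBlocking's single-threaded event loop so the scheduling is purely cooperative (x and foo are the hypothetical names from above): until foo contains a suspension point, nothing can touch x between the write and the read; once it does, another task may run in between.

    import kotlinx.coroutines.delay
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.runBlocking

    var x = 0                         // shared among tasks

    suspend fun foo() {
        delay(10)                     // the newly added I/O, i.e. a scheduling point
    }

    fun main() = runBlocking {        // single-threaded event loop, purely cooperative
        launch { x = 7 }              // another task that also writes x
        x = 3
        foo()                         // suspends here, so the other task gets to run
        println(x)                    // prints 7; without the suspension point it printed 3
    }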

In Java, on the other hand, if x is shared among tasks and the method depends on x not changing, then regardless of what foo does the code must be written as:

    xGuard.lock();
    try {
        x = 3;
        foo();
        print(x);
    } finally {
        xGuard.unlock();
    }
So adding a scheduling point to foo doesn't matter because if our method has any mutual exclusion assumptions it must express them explicitly (in practice, if x is guarded, then the guarding will often be encapsulated somehow, but the point remains). In JS that's not the case, so the addition of a scheduling point requires a careful analysis of all callers.
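
The "encapsulated somehow" case might look something like this (a hypothetical sketch, in Kotlin for brevity, though the JVM pattern is identical in Java): the lock and the invariant live in one place, so callers never manage the guard themselves and foo is free to gain or lose scheduling points.

    import java.util.concurrent.locks.ReentrantLock
    import kotlin.concurrent.withLock

    // Hypothetical encapsulation: the mutual-exclusion assumption lives here,
    // not in every caller.
    class GuardedX {
        private val guard = ReentrantLock()
        private var x = 0

        fun setAndPrint(value: Int) = guard.withLock {
            x = value
            foo()                     // whatever foo does, other tasks cannot change x here
            println(x)
        }

        private fun foo() { /* may block, do I/O, etc. */ }
    }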


Does Java actually prevent this code from compiling if `x` is shared?

    x = 3;
    foo();
    print(x);
If not, then I fail to see what the difference is with your JS example. It's still on the developer to keep track of whether x may change (and therefore whether a mutex is required), with the added disadvantage that there's no `async` keyword to clue you in that there might be a problem.

You seem to be describing a style guide rule that Java projects should adopt, not a language feature, and there's no particular reason the same style guide couldn't be adopted in another language if required.


> Does Java actually prevent this code from compiling if `x` is shared?

Why would it prevent it from compiling? Whether it's right or wrong depends on what the programmer has in mind. If they rely on a mutual exclusion constraint, they must specify it.

> It's still on the developer to keep track of whether x may change (and therefore whether a mutex is required), with the added disadvantage that there's no `async` keyword to clue you in that there might be a problem.

No, because in Java, the author of this method has to decide what they want. In JS, when the author of foo decides to change it they need to analyse what the authors of the callers had in mind.

The point is that the presence or absence of a scheduling point is not merely informative; it has an impact on the correctness of the code.

> You seem to be describing a style guide rule that Java projects should adopt, not a language feature

No, it's a language feature. In Java the default is "scheduling point allowed." If you want to exclude it, you have to be explicit. This approach makes composing code easier than the opposite default unless your entire language is built around an even stronger constraint (like Haskell).


> In JS, when the author of foo decides to change it they need to analyse what the authors of the callers had in mind.

No, because they already committed to not having any concurrent code in that method as part of the function contract. Making a function async is a breaking change (the return type changes to Promise<T>). If the author can't avoid the breaking change, then it would indeed force the caller to update their call site, cluing them in that the code may now run concurrently and mutual exclusion should be considered. There's no guesswork about what the callers are doing, because you had an explicit contract with them that there would be no concurrency.
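
The Kotlin analogue of that contract change (hypothetical names): adding the suspend modifier alters the function's signature, so every call site has to change, and each caller is thereby told that other tasks may now interleave at the call.

    import kotlinx.coroutines.delay
    import kotlinx.coroutines.runBlocking

    // Version 1 (old contract): fun fetchUser(id: Int): String = "user-$id"

    // Version 2: the function now reaches a scheduling point, and the
    // signature change makes that a breaking change for callers.
    suspend fun fetchUser(id: Int): String {
        delay(10)                     // stand-in for the new I/O call
        return "user-$id"
    }

    fun main() = runBlocking {
        println(fetchUser(42))        // callers must now themselves run in a coroutine
    }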

If a Java method is passed an object by reference and later decides to modify it on another thread instead of inline, that information is not propagated up to the callers. They would either have had to guess that the library might switch to a concurrent model and make the object thread-safe ahead of time, or they would have to read the release notes and then go find the call sites that need fixing.
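
A sketch of that scenario on the JVM (in Kotlin for brevity, names hypothetical): the signature is identical in both versions, so callers who pass a non-thread-safe object get no compile-time signal that it is now touched from another thread.

    import kotlin.concurrent.thread

    // Version 1: fun record(events: MutableList<String>, e: String) { events.add(e) }

    // Version 2: same signature, but the mutation now happens on another thread,
    // silently breaking callers that pass a plain ArrayList.
    fun record(events: MutableList<String>, e: String) {
        thread { events.add(e) }
    }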

EDIT: And again, I'm not critiquing your decision to use this model for Java. On balance I believe it was the right move because, as you note, it's the model that was already extensively used in Java. I'm just arguing that preemptive is not objectively better; it depends on what existing code is already doing and on what you're trying to accomplish with the feature.


> Making a function async is a breaking change (the return type changes to Promise<T>).

Sure, that's another way of showing the same thing, and that is the essence of non-composability or poor abstraction. It's what you want in Haskell because it's designed precisely for that, but very much not what you want in an imperative language. And the reason it's easy to see it's not what you want is that those languages, even JS, don't generally colour blocking functions. In imperative languages, blocking is an implementation detail. Deviating from that principle goes against the grain of those languages (though not of Haskell's).

> If a Java method is passed an object by reference and subsequently decides to modify it in a thread instead of inline, that information is not propagated up to the callers, so they would have either had to guess that the library might change to a concurrent model and make the object thread-safe ahead of time, or they would have to read the release notes and then go find the call sites that need fixing.

In Java (and in any imperative language that offers threading, including Rust, Kotlin, and C#), any interaction with data that may be shared across threads absolutely requires explicit attention to memory visibility. In particular, in your example you don't need to communicate anything to the caller; you just need the method that hands the object off to another thread to ensure visibility. This is what the Java Memory Model is about, and thread pools, futures, etc. do it automatically.
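
For example, a Kotlin/JVM sketch of that guarantee (hypothetical names): the hand-off to the thread pool and the later Future.get() both establish happens-before edges, so neither the caller nor the task needs any additional synchronisation for visibility.

    import java.util.concurrent.Callable
    import java.util.concurrent.Executors

    class Box(var value: Int = 0)

    fun main() {
        val pool = Executors.newFixedThreadPool(2)
        val box = Box()

        box.value = 3                              // write before the hand-off
        val future = pool.submit(Callable {        // submit() happens-before the task,
            box.value + 1                          // so the task is guaranteed to see 3
        })
        println(future.get())                      // get() happens-after the task: prints 4
        pool.shutdown()
    }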

It's true that JS doesn't require attention to memory visibility, but that has absolutely nothing to do with cooperative or preemptive scheduling. Rather, it has to do with multiprocessing. A language that offers preemptive threads over a single processor needs no more attention to memory visibility than JS. Conversely, cooperative scheduling in Kotlin, whose scheduler supports multiprocessing, also requires the same kind of attention.
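
And a sketch of that last point (assuming kotlinx.coroutines): coroutines launched on Dispatchers.Default run in parallel across worker threads, so an unsynchronised shared counter loses updates exactly as it would with raw threads, despite the scheduling being cooperative.

    import kotlinx.coroutines.Dispatchers
    import kotlinx.coroutines.joinAll
    import kotlinx.coroutines.launch
    import kotlinx.coroutines.runBlocking

    fun main() = runBlocking {
        var counter = 0
        val jobs = List(100) {
            launch(Dispatchers.Default) {          // runs on a multi-threaded pool
                repeat(1_000) { counter++ }        // unsynchronised shared write
            }
        }
        jobs.joinAll()
        println(counter)                           // almost certainly less than 100000
    }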

> I'm just arguing that preemptive is not objectively better, it depends on what existing code is already doing and on what you're trying to accomplish with the feature.

Well, I'm arguing that preemptive is objectively better in imperative languages unless there are external concerns, and that's why Erlang and Go have gone down that route. I know of one exception, although the paradigm isn't quite classical imperative: synchronous languages like Esterel [1]; but note that the model there isn't quite like async/await and isn't quite cooperative, either. I am not aware of (ordinary) imperative languages that chose cooperative scheduling primarily because they truly thought the programming model is better; its lack of composability with imperative paradigms is fairly obvious (and I know for a fact that C#, Kotlin, C++, Rust, and JS didn't choose that style because they thought it better, but rather because of other constraints).

[1]: https://en.wikipedia.org/wiki/Esterel



