Go has a mechanism to spawn a new OS thread (an M, in Go runtime parlance) if it thinks one of its threads might be blocked in a cgo call (Go's "native function" equivalent). That prevents stuff like this.
Java does the same for Object.wait(), except that the number of such compensating threads is limited by default and can only be raised via a config option. I think they exhausted the default number of compensating threads.
And they are mistaken to call this situation "pinning".
JEP 444:
> The vast majority of blocking operations in the JDK will unmount the virtual thread, freeing its carrier and the underlying OS thread to take on new work. However, some blocking operations in the JDK do not unmount the virtual thread, and thus block both its carrier and the underlying OS thread. This is because of limitations at either the OS level (e.g., many filesystem operations) or the JDK level (e.g., Object.wait()). The implementations of these blocking operations compensate for the capture of the OS thread by temporarily expanding the parallelism of the scheduler. Consequently, the number of platform threads in the scheduler's ForkJoinPool may temporarily exceed the number of available processors. The maximum number of platform threads available to the scheduler can be tuned with the system property jdk.virtualThreadScheduler.maxPoolSize.
(In my testing the default ForkJoinPool limit was 256)
So theoretically they could have raised `jdk.virtualThreadScheduler.maxPoolSize` to a number sufficient for their use case. Although their workaround with semaphores is probably more reliable - no need to guess what a sufficient number is.
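A minimal sketch of that semaphore-style workaround (my own illustration, not their code; the cap of 64 and the blocking call are made up):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BoundedCapture {
    // Keep the number of simultaneously captured carrier threads well below
    // the scheduler's maxPoolSize (256 by default in my testing), so the
    // compensation mechanism never runs out. 64 is an arbitrary cap.
    private static final Semaphore CAPTURE_PERMITS = new Semaphore(64);

    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    CAPTURE_PERMITS.acquireUninterruptibly();
                    try {
                        blockingFilesystemCall();
                    } finally {
                        CAPTURE_PERMITS.release();
                    }
                });
            }
        } // close() waits for the submitted tasks to finish
    }

    // Stand-in for an operation that blocks the carrier without unmounting
    // the virtual thread (e.g. many filesystem operations, per the quote).
    private static void blockingFilesystemCall() {
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The alternative from the quote would be to start the JVM with `-Djdk.virtualThreadScheduler.maxPoolSize=<something big enough>`, but then you are back to guessing the number.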
The situation with Object.wait() is not what JEP 444 calls "pinning". The "pinning" happens, for example, when one calls `synchronized(....) {blockingQueue.take()}`, which is not sane coding, BTW. In that case the native thread is blocked and is not compensated by another thread - much worse than Object.wait(). The number of native threads that run virtual threads equals the number of CPUs by default, so "pinning" immediately makes one CPU unavailable to the application's virtual threads.
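To make the contrast concrete, here is a sketch (my own, with made-up names) of the pinning case next to the `java.util.concurrent` alternative that does not pin:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

class PinningSketch {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
    static final Object monitor = new Object();
    static final ReentrantLock lock = new ReentrantLock();

    // Pins: take() would normally unmount the virtual thread, but it runs
    // inside a synchronized block, so the carrier stays blocked and the
    // scheduler does not add a compensating thread.
    static String pinnedTake() throws InterruptedException {
        synchronized (monitor) {
            return queue.take();
        }
    }

    // Does not pin: java.util.concurrent locks cooperate with virtual
    // threads, so take() unmounts the virtual thread and frees the carrier.
    static String unpinnedTake() throws InterruptedException {
        lock.lock();
        try {
            return queue.take();
        } finally {
            lock.unlock();
        }
    }
}
```

On JDK 21 you can run with `-Djdk.tracePinnedThreads=full` (or `short`) to log a stack trace whenever a virtual thread blocks while pinned.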
All those issues are temporary, as I understand it. The JDK team is working to fix Object.wait(), synchronized, etc.
> The situation with Object.wait() is not what JEP 444 calls "pinning". The "pinning" happens, for example, when one calls `synchronized(....) {blockingQueue.take()}` [...]
To call Object.wait() you need to own the object's monitor, which implies your code would actually look like `synchronized(....) {Object.wait()}`, in which case you would indeed be pinned.
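A tiny illustration of that requirement (hypothetical example):

```java
public class WaitNeedsMonitor {
    public static void main(String[] args) throws InterruptedException {
        Object obj = new Object();
        try {
            obj.wait(10); // calling wait() without owning the monitor...
        } catch (IllegalMonitorStateException e) {
            System.out.println("wait() outside synchronized: " + e); // ...fails
        }
        synchronized (obj) {
            obj.wait(10); // legal: the current thread owns obj's monitor
        }
    }
}
```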
As I read JEP 444 (starting from the quote above and continuing through several paragraphs, ending with the words "As always, strive to keep locking policies simple and clear."), the term "pinning" means that a blocking function which normally unmounts the virtual thread fails to do so, because it is called inside `synchronized` or from native code.
That's different from the blocking functions described in the quote, which do not even try to unmount the virtual thread, like Object.wait().
Pinning is worse than those functions, because those functions compensate for the blocked native thread by adding one more native thread to the pool.
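A rough way to see that compensation happening (a sketch, assuming JDK 21; exact counts depend on the JDK version and its heuristics, but if Object.wait() is compensated as the quote says, the platform-thread count should climb well above the CPU count):

```java
import java.lang.management.ManagementFactory;
import java.util.ArrayList;
import java.util.List;

public class CompensationDemo {
    public static void main(String[] args) throws Exception {
        Object lock = new Object();
        List<Thread> waiters = new ArrayList<>();
        // Park far more virtual threads in Object.wait() than there are CPUs.
        for (int i = 0; i < 100; i++) {
            waiters.add(Thread.ofVirtual().start(() -> {
                synchronized (lock) {
                    try {
                        lock.wait(); // captures the carrier; the scheduler should compensate
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }));
        }
        Thread.sleep(1000); // give the waiters time to reach wait()
        System.out.println("CPUs:             " + Runtime.getRuntime().availableProcessors());
        // ThreadMXBean counts only platform threads, i.e. the (expanded) carrier pool plus the rest.
        System.out.println("Platform threads: " + ManagementFactory.getThreadMXBean().getThreadCount());
        synchronized (lock) {
            lock.notifyAll();
        }
        for (Thread t : waiters) {
            t.join();
        }
    }
}
```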
So does C#, with active blocking detection (which injects threads to counteract this) and a hill-climbing algorithm to scale thread-pool threads automatically.
That hasn't always been the case - before .NET 6 there was only hill climbing, so poorly written blocking code could starve the thread pool very quickly (`for` + `Task.Run` + `Thread.Sleep` and the like). Since .NET 6, blocking threads that way makes the thread pool inject more threads without going through hill climbing, which mitigates the impact much more effectively. This does not mean such code should not be fixed, however :)