This looks really clean. It's amazing the progress Java is making recently; vastly improved async with Project Loom finally released, new FFI, and soon there'll finally be value types with Project Valhalla.
Check out "Project Amber in Action" [1], String templates [2], Unnamed Classes [3], etc. Even considering the critical importance of compatibility, the language has come a long way and is still going.
Backwards compatibility prevents removing constructs that have not aged well. But the (implicitly recommended) way of programming in Java is with an IDE. IntelliJ will softly shame you into getting rid of older classes and constructs (getting away from the date mistakes in the JDK, converting C-style for loops into implicitly parallelizable streams, etc.)
Wanting Java to have better syntax does not mean that it is intolerable. I use Java more than I use any other language. I have a lot that depends on it. It would be really nice to have a new version of Java that rethinks the syntax from the ground up now that we have lambdas and generics. There are a lot of weird corners that make it hard to write code that would make sense to anyone not familiar with the reasons why we have all of these special rules stacked on top of each other.
Kotlin, like Scala, strikes me as too similar to Java to really be compelling. Contrast that to Clojure which is an entirely different language that takes advantage of the JVM.
One of the little-known cool new things here is the set of manual memory management APIs they've added. They give you a lot of upgrades over malloc/free, with good performance:
- Bounds checks, we know about those. The JIT compiler can move them around to reduce their costs.
- Temporal safety is new. It means you can't have crashes or read bad data due to use-after-free bugs, even though deallocation/unmapping is explicit and not GC controlled. It's efficient (see below).
- Thread safety. You can allocate memory that can't be shared between threads (confined segments).
- You can create customized memory allocators (arenas)
- You can allocate memory and give it to another piece of code, without that code being able to free it. This lets you communicate memory lifetimes via the type system.
- You can wrap memory allocated by external libraries to give those allocations the same abilities.
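These features map onto the `java.lang.foreign` API (final as of JDK 22). A minimal sketch of a confined arena, showing the bounds and temporal safety guarantees in action (the class name is mine):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class ArenaDemo {
    public static void main(String[] args) {
        MemorySegment leaked;
        // A confined arena: memory is owned by the creating thread and
        // freed deterministically when the arena is closed, no GC involved.
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment seg = arena.allocate(ValueLayout.JAVA_INT, 4); // 4 ints
            seg.setAtIndex(ValueLayout.JAVA_INT, 0, 42);
            System.out.println(seg.getAtIndex(ValueLayout.JAVA_INT, 0)); // 42

            // Bounds safety: an out-of-range access throws instead of
            // corrupting memory.
            try {
                seg.getAtIndex(ValueLayout.JAVA_INT, 99);
            } catch (IndexOutOfBoundsException e) {
                System.out.println("bounds check caught");
            }
            leaked = seg;
        } // arena closed: the backing memory is freed here

        // Temporal safety: using the segment after free is a safe
        // exception, not undefined behaviour.
        try {
            leaked.getAtIndex(ValueLayout.JAVA_INT, 0);
        } catch (IllegalStateException e) {
            System.out.println("use-after-free caught");
        }
    }
}
```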
This isn't a set of compile-time guarantees like what Rust gives you, it's still Java, but it does provide the usual guarantees Java developers expect, such as that buggy code can't crash the VM.
It's also cheaper than you'd expect at runtime. The JIT compiler understands how to eliminate redundant bounds/free checks, so code that looks like it'd do a lot of bounds/free checking will only really do it once.
You might wonder how that works in a multithreaded language. What stops a thread from checking once that the memory isn't closed, then doing a lot of operations, while another thread closes the memory in the meantime? There's a TOCTOU problem. Their solution is to use the thread-local handshake mechanism built into the JVM, which lets threads send "messages" to each other even if the user's code isn't polling for a message. This is built on the safepoint mechanism, which is itself a highly optimized and efficient per-thread check. If a multi-threaded segment is closed, then each thread that is accessing it is brought to a safepoint and de-optimized (forced back to the interpreter from compiled code), at which point the flag will be checked on the next access and an exception thrown.
In this way bounds and freed-flag checks can appear as if they are done on every single access, whilst still having high performance, and use-after-free bugs will always yield a safe exception that can be logged and reported and doesn't corrupt memory.
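The race itself can't be reproduced deterministically in a few lines, but the observable outcome can: once a shared arena is closed, any access from any thread raises a plain `IllegalStateException` rather than corrupting memory. A sketch, assuming JDK 22+ where the API is final:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class SharedCloseDemo {
    public static void main(String[] args) throws Exception {
        // A shared arena: its segments may be accessed from any thread,
        // and close() uses the thread-local handshake described above to
        // bring all accessing threads to agreement.
        Arena arena = Arena.ofShared();
        MemorySegment seg = arena.allocate(ValueLayout.JAVA_LONG);
        seg.set(ValueLayout.JAVA_LONG, 0, 7L);

        arena.close(); // after this, every thread sees the memory as gone

        Thread t = new Thread(() -> {
            try {
                seg.get(ValueLayout.JAVA_LONG, 0);
                System.out.println("unreachable");
            } catch (IllegalStateException e) {
                System.out.println("safe exception in other thread");
            }
        });
        t.start();
        t.join();
    }
}
```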
This will be very interesting for not just FFI, but for GC-free allocations for tight loops!
I bet that games could very well be rewritten to use this to allocate memory. Though the layouts and varhandles look tedious to use, maybe it'd be fine.
Java deals pretty well with that case already. Small ephemeral objects have allocation costs closer to C-style stack allocation than heap allocation in most cases (due to TLABs etc.)
This has far bigger implications for databases, kv-stores, and similar, which are currently severely hamstrung by only being able to memory map 2 GB at a time.
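The 2 GB ceiling comes from `ByteBuffer` using `int` offsets; `MemorySegment` uses `long` offsets, and `FileChannel.map` gained an overload that returns a segment. A sketch mapping a 3 GiB file in one segment (JDK 22+; the file is created sparse so it shouldn't actually consume 3 GiB of disk):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BigMapDemo {
    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("bigmap", ".bin");
        long size = 3L << 30; // 3 GiB, well past the ByteBuffer limit
        try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "rw")) {
            raf.setLength(size); // sparse on most filesystems
        }
        try (Arena arena = Arena.ofConfined();
             FileChannel ch = FileChannel.open(path,
                     StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // One segment for the whole file; a MappedByteBuffer could
            // never address past Integer.MAX_VALUE.
            MemorySegment seg = ch.map(FileChannel.MapMode.READ_WRITE, 0, size, arena);
            long offset = 2_500_000_000L; // beyond the 2 GiB mark
            seg.set(ValueLayout.JAVA_BYTE, offset, (byte) 1);
            System.out.println(seg.get(ValueLayout.JAVA_BYTE, offset)); // 1
            System.out.println(seg.byteSize() > Integer.MAX_VALUE);     // true
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```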
> I bet that games could very well be rewritten to use this to allocate memory. Though the layouts and varhandles look tedious to use, maybe it'd be fine.
I mean, you could conceivably use it for an ECS or scratch allocation, something like that, but many of those patterns from the C++ world solve problems that Java largely doesn't have (e.g. heap fragmentation).
> Small ephemeral objects have allocation costs closer to C-style stack allocation than heap allocation in most cases (due to TLABs etc.)
So, I have some code with one loop which reads strings from a file and does some computations. When I add a single `new SomeSmallObj()` there, which I would allocate on the stack in C, it starts running 2 times slower. How can I debug/reason about/improve this case?
You haven't provided enough details of your object allocation. It depends entirely on escape analysis and whether the JIT thinks the object escapes. If you're doing a simple non-escaping allocation, it shouldn't matter.
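To make that concrete, here's a sketch (names hypothetical, standing in for the commenter's `SomeSmallObj`) contrasting a non-escaping allocation, which HotSpot can typically scalar-replace after JIT compilation, with one that escapes into a collection and must really be heap-allocated; the comments note flags for investigating a real slowdown:

```java
import java.util.ArrayList;
import java.util.List;

public class EscapeDemo {
    // Hypothetical small value holder.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static final List<Point> sink = new ArrayList<>();

    // Non-escaping: the object never leaves the method, so after JIT
    // compilation the allocation can be scalar-replaced entirely.
    static long sumNoEscape(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1);
            total += p.x + p.y;
        }
        return total;
    }

    // Escaping: storing the object in a collection forces a real heap
    // allocation on every iteration.
    static long sumEscape(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1);
            sink.add(p); // escapes, must be allocated
            total += p.x + p.y;
        }
        sink.clear();
        return total;
    }

    public static void main(String[] args) {
        // Same result either way; only the allocation behaviour differs.
        System.out.println(sumNoEscape(1000) == sumEscape(1000));
        // To investigate: compare runs with -XX:-DoEscapeAnalysis vs the
        // default, and look at allocation profiles (e.g. async-profiler in
        // alloc mode) or GC pressure via -Xlog:gc.
    }
}
```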
Common ECS approaches are highly compatible with, and are often combined with, data-oriented design techniques. Data for all instances of a component are commonly stored together in physical memory, enabling efficient memory access for systems which operate over many entities.
Java continues to get immense real-world usage, as it has for decades, so it will continue to make headlines. Meanwhile, each trendy lang will make headlines for a few years as 4 startups gush about it before they move to the next trendy lang.
Waiting on the whole async dead-end to come to terms with cheap threads here. It's going to take a long time, but let's not kid ourselves: the TCO-driven accountants hold the roots of true power.
How does that work exactly? Because at some point the JVM must call the native socket/file/etc APIs, right? That isn't any different from .NET, is it?
.NET does not need to deal with the side effects of thread-locals, do thread pinning, or much else; it calls into native code directly, like Rust or C++ calling its dynamically linked dependencies. The only costs are pinning objects if you are passing, say, a pointer to a managed array or string (the upfront cost is free, but you pay for it later if you pin for a long time), and notifying the GC not to preempt the thread while it's in unmanaged land.
I believe Loom virtual threads become pinned if they call native functions, so besides occupying a carrier thread that could be running other virtual threads, they shouldn't have much overhead.
This JEP will be in preview in Java 21 and the plan is to finalise it by Java 22. So it will be another ~2 years until it is part of an LTS release.