I’ve always admired Java for managing to slowly but surely move the language forward while maintaining backwards compatibility and keeping the community together. When you look at how many other languages have faltered and lost momentum due to transition issues (Perl 6, Python 2/3), it really shows how difficult a task it is.
Progress can seem glacially slow at times - it’s always disappointing to see your favorite upcoming feature get pushed to a later release - but the current language is undeniably light years better than what we started with.
I would add C# to the list of languages with good backward compatibility and progression with every new version.
Very similar to Java, and it has a larger feature set with somewhat decent documentation.
One thing I've always admired about C# is that it's an ISO standard, though I'm not sure that really means much these days for programming languages. It seems like it would be an amazing hedge against a private organization co-opting the language in the future.
The catch is that C# standardization lags behind language evolution. At the moment, the most recent edition is ECMA-334:2017, aka ISO/IEC 23270:2018. It corresponds to C# 5.0, originally released back in 2012 - the version that introduced async/await - and it's still missing later features like null-conditional chaining (`?.`).
Yeah, .NET is an odd choice to praise for backwards compatibility.
.NET 1 -> .NET 2 was iirc a total break due to the integration of generics, amongst other things. Source code had to be rewritten.
.NET Framework -> .NET Core was another total break. Lots of .NET code that worked on Framework didn't run on Core at all and had to be ported to different libraries or required other changes.
The fact is, the .NET world has made moves that obsoleted lots of prior code several times. Java never has, unless you count the various boulders rolled down the hill at people using JVM internal APIs.
Well, kind of. In practice, despite the faster release schedule, quite a few features have been in incubation for years. Most features people would name as a favorite have been discussed for nearly 5 years now and are still not targeted for a stable API yet.
That is true; the cost of mistakes for a platform used so broadly is high, which is why the folks working on the platform don't rush things that aren't ready or don't feel natural yet.
That said, the faster release cadence has done more than just cut the "wait time for next release" down. For example, the preview/incubator system has allowed the team to get features into releases without being final (see JEPs 11 and 12, or [1]).
And each of the innovation projects - Loom, Panama, and Valhalla - is able to release features incrementally. In fact, it could be said that something like Valhalla wouldn't even be possible without a faster release cadence.
I could go on and on with anecdotal stories about faster pace of innovation, less energy spent on releases, etc....
The simplest example of why this would be valuable (edit: that was not an intentional pun): the Optional type could become stack-based, such that you could ensure the variable referring to the Optional is itself never null. This would help reduce a common class of bugs in Java code stemming from NPEs.
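To make that concrete, here's the pattern in today's Java - the catch being that each Optional is currently an ordinary heap object, which is exactly the overhead a stack-based (value-type) Optional would remove. The names here are made up for the sketch:

```java
import java.util.Map;
import java.util.Optional;

public class OptionalDemo {
    static final Map<String, String> CONFIG = Map.of("host", "localhost");

    // Returning Optional makes the absence explicit in the signature,
    // instead of relying on the caller to remember a null check.
    static Optional<String> lookup(String key) {
        return Optional.ofNullable(CONFIG.get(key));
    }

    public static void main(String[] args) {
        String host = lookup("host").orElse("0.0.0.0");
        String port = lookup("port").orElse("8080"); // absent key, no NPE
        System.out.println(host + ":" + port);
    }
}
```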
I haven't read up on the details of these but this is a huge step in the right direction if pulled off properly. The dichotomy between primitive and non-primitive types in Java is a real rough spot. I'd have to see how they deal with null values though because the introduction of auto-boxing certainly introduced a new area for NullPointerExceptions to catch the unwary off-guard.
There is extensive discussion of that topic in the valhalla-dev and valhalla-spec-experts mailing lists, but I think the details are still being worked out.
I recall someone in /r/java saying that I should stick with Java for the new project because project loom is just around the corner. The system has been in production for 3 or 4 years by now.
Kotlin's approach of making null act kind of like Optional is pretty nice, but I wish there was a strict option for interacting with Java types -- by default, all Java-native types (type names ending with !) can be null and are not null checked at compile time.
In our code base we just throw a `?:` after any type that shows up as Type! and handle it immediately. If you actually expect a null value you can also use `?: null`
Boolean truthy = new Boolean(false);
truthy = null; // compiles just fine
The reason Boolean (and the other boxed primitive types) exists is that it is an Object (a reference type), and that allows boxed values to be used in things like collections that expect Objects and not primitive types.
Looking at https://openjdk.java.net/jeps/401 it's not totally clear to me at the moment how you can use ValueTypes (primitive keyword) in Collections. Might be that you'll need to take a reference to them with Type.ref (but I need to read 401 in more detail to understand that).
> Looking at https://openjdk.java.net/jeps/401 it's not totally clear to me at the moment how you can use ValueTypes (primitive keyword) in Collections.
JEP 401 won't cover it quite yet, note this from the "Non Goals" section:
> An important followup effort, not covered by these JEPs, will enhance the JVM to specialize generic classes and bytecode for different primitive value type layouts.
It wasn't until recently that I understood for myself just how large the surface area of the "value types" problem is - it will be delivered incrementally. JEP 401 and 402 are the first steps (if I were to guess they will show up as previews soon? maybe JDK 17 or 18?), but there is more to come.
That's not exactly the same thing... It could also (again, I don't know if this is part of the JEP or not) be used to run stack-based destructors (finalizers), and then Java could implement RAII. Reading the JEP, it discusses perhaps disallowing finalize as a method on primitive objects, so this might not be in scope.
Additionally, the type is also lighter weight than an Object and should be stack allocated, so there are memory advantages to not having to use boxed types as well.
> How does making Optionals allocated on the stack (value types) prevent this?
Let's not confuse language-specification issues with implementation issues. Both value and object types can be allocated on the stack. Both can be allocated on the heap. Both can avoid allocation entirely and be scalar-replaced. So this isn't the meaningful difference!
I'm looking forward to Valhalla too, but I think it won't be as impactful as people think. In particular that use case is a poor fit for Valhalla and I don't think Optional will be widely adopted even post-value types. The intent here is to mark which parameters and return types can be null. A laudable goal. However:
1. You will still be using many, many libraries that pre-date cheap Optional. For those, the lack of Optional will NOT indicate the value is always present, it will mean "no optionality information available". Therefore by looking at an API you can't actually tell what the lack of Optional means: it could be deliberate, or it could be legacy. This will make the signal useless.
2. Optional is a very verbose, heavyweight syntax for handling optionality. Therefore some developers will not use it on the grounds that Java is already a very verbose language and it really doesn't need more. This will cause further divergence.
3. Many developers will want to preserve compatibility with old JVMs. Even though Optional is around a long time now, guaranteed cheap/fast Optional will take a long time to roll out and may never make it to Android at all. This will cause some library developers writing high performance code to shun it because null is as cheap as you can possibly get, and universally supported. Again, the lack of Optional will be meaningless.
4. The killer blow: adding Optional changes your function signatures and therefore breaks both binary and source compatibility. Many library developers want to keep a backwards compatible API, or are required to, and that includes for example the Java standard libraries. Therefore lack of Optional will NEVER mean "guaranteed to be present" in most Java code because the standard library won't use it that way.
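For what it's worth, the verbosity complaint in point 2 is easy to see side by side (a contrived sketch; the names are made up):

```java
import java.util.Optional;

public class VerbosityDemo {
    // Null-based: terse and as cheap as it gets, but absence is invisible
    // in the signature.
    static String findNullable(String key) {
        return "known".equals(key) ? "value" : null;
    }

    // Optional-based: absence is explicit, at the cost of wrapping both the
    // return value and every call site.
    static Optional<String> findOptional(String key) {
        return Optional.ofNullable(findNullable(key));
    }

    public static void main(String[] args) {
        // The same lookup, written both ways:
        String v = findNullable("known");
        String viaNull = (v != null) ? v.toUpperCase() : "MISSING";
        String viaOptional = findOptional("known").map(String::toUpperCase).orElse("MISSING");
        System.out.println(viaNull + " " + viaOptional); // VALUE VALUE
    }
}
```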
There is a far, far better way to handle this and it's what Kotlin did:
1. Integrate optionality into the language using lightweight syntax and type inference.
2. Use non-denotable flex-types at the edges, so missing optionality information is added transparently in the source code the first time a developer specifies a type explicitly.
3. Allow annotations to provide nullability information, preserving source and binary compatibility. Integrate it with IDE and compiler data flow analysis. IntelliJ already does this and you can make nullability violations hard errors today by adjusting your project preferences.
4. Finally, allow third party libraries that aren't being changed to be annotated using external annotation files.
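For illustration, the annotation approach in point 3 looks roughly like this in plain Java. The @Nullable here is declared locally as a stand-in for whichever real flavor a project picks (JetBrains, JSR-305, Checker); enforcement comes from the IDE/analyzer, not the JVM:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NullabilityDemo {
    // Stand-in annotation; real projects use org.jetbrains.annotations.Nullable,
    // javax.annotation.Nullable, etc. Tooling, not the runtime, enforces it.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.METHOD, ElementType.PARAMETER})
    @interface Nullable {}

    // Unannotated return: assumed non-null by convention.
    static String greeting() { return "hello"; }

    // Annotated return: a data-flow checker flags any unguarded dereference.
    static @Nullable String maybeGreeting(boolean present) {
        return present ? "hello" : null;
    }

    public static void main(String[] args) {
        String g = maybeGreeting(false);
        // The analyzer would require this guard before calling g.length().
        System.out.println(g != null ? g.length() : -1);
    }
}
```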
It has value classes, which are essentially the same thing. They compile down to the underlying value itself but wrap the value in extra functionality if you so choose.
They can only wrap a single primitive type - it's a nice feature but nothing extraordinary. When Java implements primitive types, Scala will probably use this same construct. But the change has to come at the runtime level.
I am unreasonably excited by Loom, for two reasons:
1. I cannot wrap my head around library-based reactive systems. I have tried and tried and continue to try. But they're like some of the original GoF design patterns: they exist to solve the language, not the problem. Loom's promise to make steam-powered linear code behave mostly like a fully-dressed reactive library system is extremely welcome.
2. Structured concurrency. Among the entries on the thick tablet of hatreds of Golang that I carry upon my heart, mystery action at a distance due to go(to) routines appears infrequently but painfully. Like so many Go features they conspire against composability and testability. Structured concurrency looks like an escape from the madness. Please let it be true.
What you're waiting for is already available with Kotlin coroutines (which has a very clean API btw)
And no, reactive streams are still useful for processing streams of collections asynchronously, and here again Kotlin solves it with Flow: https://kotlinlang.org/docs/flow.html
I like Kotlin coroutines and I've used them, but they (this argument is controversial) have a function colouring problem. As the JVM adds virtual threads I expect that a rebasing of Kotlin's coroutines will remove that objection.
Current reactive code is littered with async/await on every IO function, which feels like bolting on reactive behavior. A library-based threading model provides synchronous-style thinking on top of an optimized async compute model.
Not exactly. That's the programming rule for structured concurrency which is a very simple bit of logic added on to the ExecutorService class. Loom itself is just about making threads super cheap/fast. You can use new cheap threads in exactly the same way as normal and nothing will complain. You can also use structured concurrency with old threads, and again, nothing will complain. And you don't need Java support for structured concurrency. You could implement it today with some wrapper classes.
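A toy sketch of that last claim - structured concurrency as a thin wrapper over ExecutorService, no Loom required. The class and method names (TaskScope, fork) are made up for the example:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// A "scope" whose close() refuses to return until every task forked inside
// it has finished - the core structured-concurrency guarantee.
public class TaskScope implements AutoCloseable {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public <T> Future<T> fork(Callable<T> task) {
        return pool.submit(task);
    }

    @Override
    public void close() {
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    static int parallelSum() throws Exception {
        try (TaskScope scope = new TaskScope()) {
            Future<Integer> a = scope.fork(() -> 20);
            Future<Integer> b = scope.fork(() -> 22);
            return a.get() + b.get();
        } // no forked task can outlive this block
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum()); // 42
    }
}
```

The try-with-resources block gives tasks the same "confined lifetime" that structured concurrency promises: control cannot leave the block while children are still running.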
The discussion further down about "scope variables" foreshadows making structured concurrency more strongly, directly supported by the language's runtime:
> What about inheritance? Because SVs are immutable, and because structured concurrency also gives us a syntax-confined thread lifetime, SV inheritance fits structured concurrency like a glove
Worth keeping in mind that the implementation they have chosen - copying stacks to and from the heap - has a larger performance overhead than something like Go, which was designed around green threads/fibers/virtual threads/etc. from the start.
If you can keep your stack size very low in each task then this probably won’t be noticeable if you are doing lots of IO which can trigger wakeups. More of a replacement for an async/await state machine for handling a request than something like a goroutine you could use for everything.
I have a feeling lots of existing Java libraries and frameworks won’t work well with it. Will be interesting to see how it plays out.
The technical details are way out of my league, but I remember reading that stack copying can be made really cheap somehow? Perhaps due to the JVM having more abstraction below a virtual thread than Go?
Many virtual threads won’t have large stacks, and many others will not substantially grow or shrink their stacks between yields. You only need to copy stack frames from the heap as they are needed (so as the stack unwinds) and you only need to copy new stack frames to the heap when a thread yields. With a few other clever tricks this can really reduce the data that has to be copied.
We can do these tricks because we know lots of properties of the Java stack, and they give a different set of trade offs to Go’s approach.
Specifically, the JVM knows there are no pointers into the stack because Java and bytecode cannot express:
int a = 1;
int *b = &a;
Or rather, this construct can occur, but it's always the decision of the JIT compiler to emit such code and it is cooperating with the rest of the runtime.
Oh yes. Did you ever have a statement where you chain more than two method calls and don't know which returns null? I tend to split calls like this on separate lines just to be able to tell which call returns null from the line number in the stack trace.
obj.call1().call2().call3();
becomes
obj.call1()
.call2()
.call3();
And if I now get an NPE at line three, I know that call2() returned null.
Not sure what it was about those versions particularly, but I kinda lost track of the language and runtime after Java 8 and I’ve spoken to a lot of other people who did as well.
Admittedly it’s around the time I stopped working in Java a lot, but I guess a lot of others did as well?
The JVM will be getting state-of-the-art, fine-grained generics specialization, reified generics, and even const generics (you know, the big feature that made a lot of noise in the Rust world).
https://www.reddit.com/r/java/comments/m2dfb5/parametric_jvm...
(note that Kotlin already has reified generics to some extent, and that Scala supports explicit specialization (and Kotlin does too, through overloading))
I was programming in Java before generics came out. While generics were a very big upgrade, it was such a disastrous mistake to go for type erasure, because Java has a bifurcated type system. As we all well know, you can't be generic over a primitive type, for exactly this reason. I knew then and have minefield-of-rakes stumbled my way through every single consequence of that decision and still think it was wrong. That decision was made, approximately, because it was a.) too hard to retrofit the existing libraries without erasure and wildcards, and b.) the VM developers staunchly resisted extending class files.
I've been away from Java for 7-8 years, and returning to use lambdas now. At first it feels like a pleasant experience, and it was a neat trick to allow single-method interfaces to basically denote function types. But that early mistake of not allowing generics over primitives hits back again... LongFunction, IntFunction, oy vey. Not adding a syntax for function types but relying on single-method interfaces seems like a simple, clever hack. But I think it will turn out to be another mistake in the same class as broken generics.
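To illustrate the LongFunction/IntFunction complaint - erasure forces a choice between a boxing generic interface and a zoo of hand-specialized ones:

```java
import java.util.function.Function;
import java.util.function.LongFunction;

public class SpecializationDemo {
    // Generic version: the long argument is boxed to Long on every call.
    static String describeBoxed(Function<Long, String> f, long x) {
        return f.apply(x); // autoboxing happens here
    }

    // Hand-specialized version: no boxing, but a separate interface to learn -
    // and the JDK ships one of these per primitive/arity combination.
    static String describePrimitive(LongFunction<String> f, long x) {
        return f.apply(x);
    }

    public static void main(String[] args) {
        System.out.println(describeBoxed(n -> "n=" + n, 7L));
        System.out.println(describePrimitive(n -> "n=" + n, 7L));
    }
}
```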
I spent 15 years working on Virgil [http://github.com/titzer/virgil]. Having tuples and real generics is really important to allow a language to be fully combinatorially complete. I wrote a paper about it. Nobody read it. Ah well. Dang it.
Firstly, Java's generics did extend the class file format, quite significantly. Lots of metadata about generics and type variables makes it into the class files which is why you can reflect over them. It requires a gross trick involving creating anonymous objects that subclass a TypeHolder or TypeToken style class but you can do it because the data is there.
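For the curious, the TypeToken trick works because a class's generic supertype (unlike an instance's type arguments) is recorded in the class file, where reflection can read it back:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

public class TypeTokenDemo {
    // The "gross trick": an anonymous subclass fixes the type variable, and
    // the generic superclass survives erasure in the class-file metadata.
    static abstract class TypeToken<T> {
        final Type type = ((ParameterizedType) getClass().getGenericSuperclass())
                .getActualTypeArguments()[0];
    }

    public static void main(String[] args) {
        Type t = new TypeToken<List<String>>() {}.type;
        // Prints the full parameterized type, including the String argument
        // that erasure removes from instances.
        System.out.println(t);
    }
}
```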
What they didn't want to do was break existing code and tie the JVM too deeply to their specific choice of generics. The downside is sometimes you can't do stuff you want to do, like overload a method in ways that only differ by a type variable. The upside is that there are still tons of libraries out there that are useful which at least internally have pre-generics code: backwards compatibility has real value. Also, other languages like Scala and Kotlin were able to try out different approaches to generics. Kotlin in particular improves on Java's existing generics and that's possible partly due to erasure.
Also, truly reified generics is really hard. Look at Valhalla and the "parametric JVM" documents. Doing it well involves a lot of complex design choices and support infrastructure. Most languages struggle with this. For instance it's a big part of why C++ libraries find it hard to export stable ABIs.
All of what you're saying doesn't actually contradict what I wrote. The metadata for generics is not used by the VM during execution. It only exists so that you can compile against .class files as if they were source.
IMHO it was still a major design mistake that has cost everyone else more time in the end. A classic near-term/long-term miscalculation. For a counterpoint, C# introduced generics and rewrote the class library. They're in the bytecode, not just metadata. The VM dynamically specializes JIT code as necessary, and shares equivalent specializations to avoid code explosion.
> Also, truly reified generics is really hard.
I think this statement is false. It's not intrinsically hard; it's only hard because of a lot of other constraints. For example, Virgil has "reified"[1] generics since 2.0 and I just use monomorphization. MLton did the same, 20 years back. Code explosion isn't that bad for medium-sized programs.
[1] "reified" implies there is a runtime representation, which is not really what's going on. With static specialization, it's possible to completely compile away any additional metadata representation, as it either becomes implicitly part of a function specialization or the class metadata for specialized classes.
The only real reason I see to not use monomorphization is if you have polymorphic recursion, or first-class polymorphism. Virgil doesn't allow either of those.
And it is also a big part of why all of Microsoft's attempts to have a full AOT story in .NET have not enjoyed much love.
NGEN, available since version 1.0, never went beyond providing faster startup, dropping back into the JIT for code that is too hard to AOT-compile.
MDIL, taken from Singularity into Windows 8/8.1, adopted mixed IL/native code binaries that were linked at installation time.
.NET Native is what .NET 1.0 should have been all along, but again it imposes some restrictions on regular .NET code, and it required COM (now WinRT) to be extended with some kind of lightweight generics ABI. It will most likely be killed when .NET 6 AOT comes out, along with the whole Reunion project to bring core UWP stuff into the regular Win32 stack.
Mono AOT and IL2CPP mostly work by mapping .NET generics onto C++ template semantics, which works most of the time, but with caveats when taking any random .NET library not written with them in mind.
So while .NET's generics model was a much better approach in general, it isn't without its own share of downsides.
As someone who began programming in undergrad, Java was the university's language of choice for the "introduction to programming" course. My first internship was in Java SE 6 and part of the big debate for that summer was when to move to 7.
I haven't used it since but I'm glad to see it's gotten the more modern language features that make writing and reading code much easier. Without this list, I probably would never have learned this so thanks
After a long stint with Scala, I recently came back to Java because I'm setting up a team for a greenfield project. Java has a good balance for ease of use, type safety, massive+mature ecosystem, and talent pool.
The new language features (post v5) and framework maturity help reduce the "bloated" feeling that made me switch to Scala. Still not happy with the semicolon.
What do you mean by "JVM-aligned" and how is Kotlin more JVM-aligned than Scala? Speaking as someone who uses Scala on a daily basis and never touched Kotlin.
Kotlin types and Java types are much more compatible than Scala types. No need to use converters. Also, Scala leans very FP, while Kotlin and Java don't, so they are also more similar in that regard, though I don't know if that's what "JVM-aligned" means here. Overall, Kotlin and Java just have better interoperability than Scala and Java
Kotlin sold its soul to the Mountain View overlords, while JetBrains is trying to create a Kotlin platform of its own.
Now it is full of @JvmSomething annotations, and one needs to rely on KMM for writing portable code across the JVM and ART.
Thanks to #KotlinFirst and Google's reluctance to improve Android Java, Kotlin will have to choose which ecosystem gets a platform-first developer experience without forcing devs to write FFI annotations and wrapper libraries.
The JVM is designed to execute the constructs of the Java language. Scala compiles to JVM bytecodes, but it does a lot of (very cool!) things that are not natively supported by the JVM, and thus tend to be pretty expensive. Scala's pattern matching, for example, goes beyond what can be efficiently expressed in the JVM.
As a rule, Kotlin does not take this approach. Instead, almost all the constructs in Kotlin can be cleanly expressed in JVM bytecodes.
Reified generics is a good example. Kotlin only allows them when a function can be inlined, since in those situations, you don't actually need generic arguments, since the code in the inlinable function can be dropped in place. This allows for the convenience of reified generics -- but only if you're willing to live within the limitations of the inline restriction. (And it's also elegant, since it'll be easy to relax the constraint once the JVM adds the needed support.)
I've run across one or two things in Kotlin that don't translate to JVM bytecode elegantly, although I'm struggling to find a good example off the cuff. They're definitely the exception, though, whereas the Scala team took a more radical approach.
It's not clear that bytecode changes are needed for efficient pattern matching support.
Java is getting pattern matching support. The related JEPs mention that:
> The implementation will likely make use of dynamic constants
https://openjdk.java.net/jeps/309
I strongly suspect that Scala could be more efficient if it used this too. However, I have no idea whether it would also improve compile times. (Kotlin has so far put off implementing pattern matching because they say it increases compile time, but maybe that doesn't hold with dynamic constants.)
All the backend projects I work on were originally written in Java. I'll admit that Streams and Lombok make it a somewhat pleasant experience to work with, at least compared to when I used it in college (I think Java 8 was newest?).
But that said, I once tried writing a new feature in Kotlin, and now all our code is Kotlin. I don't really see how Java can compete with the non-nullable types, data classes, and type-inferencing Kotlin gives you, with easy interoperability, to boot! (Although you might start regretting your use of Lombok at that point...)
I write a lot of Kotlin code but still tend to use Java a lot. All of the features you mention are not really an advantage for Kotlin anymore.
Java has had tools to deal with nulls for a long time, we tend to assume everything is non-null unless annotated with `@Nullable` and let tools check correct usage at compile time... data classes are very similar to Java 16's records. Type-inference since Java introduced `var` is pretty much on-par with Kotlin.
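For anyone comparing from the Kotlin side, the record-plus-var combination looks like this (Java 16+):

```java
public class RecordDemo {
    // One line yields constructor, accessors, equals, hashCode and toString -
    // roughly what a Kotlin data class (or Lombok's @Value) provides.
    record Point(int x, int y) {}

    public static void main(String[] args) {
        var p = new Point(1, 2);         // 'var' infers Point
        var q = new Point(1, 2);
        System.out.println(p.equals(q)); // structural equality for free: true
        System.out.println(p);           // Point[x=1, y=2]
    }
}
```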
Some people have already started talking seriously about Kotlin now just adding friction, as you need to keep updating its version and the Gradle plugin (which is not compatible with some versions of Gradle if you're stuck with some old Gradle plugins). Java 17 should probably stabilize sealed classes, which is one of the biggest advantages of Kotlin at the moment... to be honest, even though I've always been a proponent of Kotlin, I am not nearly as enthusiastic now as I used to be about it.
If I remember correctly, it's fairly easy to use @Nullable and still end up with nulls there, even without an IDE warning. I'll have to double check to see if I can still reproduce that.
Value classes cannot provide the performance/memory flatness that Valhalla will bring, though. So I'm not really sure what benefit "value classes" bring. Just to make sure: are you talking about the inline class, which is just a wrapper around a single property?
"value class" is the next step for inline classes. They are going to behave very much like Swift's struct and its "mutating methods". And they are going to be "Valhalla ready" so we'll get the performance improvements automatically when it lands.
In a sense, they are an upgraded data class with a much more precise and controlled form of mutation. Rather than a `var` field meaning the object will mutate in-place, it will basically automatically call a kind of (implicit) "copy()" method. Further, you may only mutate these `var` fields inside of class methods that are marked with a new keyword: `mutating`.
Read through the KEEP. It's pretty interesting stuff. At first it kind of rubbed me the wrong way, but the more I thought about it, the more excited I got about it- especially after thinking about how Swift does it and it works well there.
Well, the Checker Framework for null checking will limit you to Java 11 or earlier, at least until sometime after Java 17 lands. Also, it is a constant pain having it guess wrongly about map-value nullability based on key provenance, which it's not really smart enough to track, and it gets some Java API nullabilities wrong, which need correcting here and there at annoying times. Better than nothing, but nowhere near as good as proper language support IMO.
We don't use Checker; we tried it and it was too buggy/complex. We use IntelliJ itself to do the checks (so they don't run on every compilation, but they do run every time you change something) - you can configure it to treat null issues as errors instead of warnings. This is of course not 100%, but it doesn't need to be; tests and code review tend to catch the remaining places where we forgot to check for null.
NullPointerException is really rare in our codebase which is a few million lines of code (something like 80% Java, 20% Kotlin), so I wouldn't call it a major issue or even a minor issue.
At work, everyone has the choice to write code in either Java or Kotlin and most people, most of the time, stick with Java, so the percentage of Kotlin code is not increasing, it's mostly stable lately.
> Java has had tools to deal with nulls for a long time, we tend to assume everything is non-null unless annotated with `@Nullable` and let tools check correct usage at compile time...
Oh yeah, nothing like creating Frankenstein's monster instead of using a tool that was designed for the purpose. And no, it's not the same. You'll never beat the Kotlin compiler, no matter how many annotations you use - you will always miss something.
> data classes are very similar to Java 16's records.
You mean Java 16's records are similar to data classes, since those were first.
> Type-inference since Java introduced `var` is pretty much on-par with Kotlin.
Not even close. Can't use it outside of functions, clunky final var instead of val.
> Some people have already started talking seriously about Kotlin now just adding friction, as you need to keep updating its version and the Gradle plugin (which is not compatible with some versions of Gradle if you're stuck with some old Gradle plugins).
My favorite kind of people. 'Some'.
> Java 17 should probably stabilize sealed classes, which is one of the biggest advantages of Kotlin at the moment...
Nice!
Let's see how long the upgrade to Java 17 takes, while Kotlin already has everything and much more, with compilation to JS and native coming in the future.
You can only assume non-null for your own code though. The java ecosystem of 3rd-party libraries will give you nulls, and most of the time won't include nullability annotations.
var is a nice step, but encourages mutability since it's not final by default.
Those features (except for nullability types) predate Java, let alone Scala and Kotlin, by a couple of decades. Those three -- like all programming languages -- get inspiration from similar sources. Java is just the most conservative of the three with regards to language features.
I think it's disingenuous to deflect by saying that Scala didn't invent those concepts.
I think it's pretty clearly significant that Scala and Kotlin are JVM languages, specifically.
If we just said "Well, StandardML had XYZ feature 30 years ago," you could reasonably wonder whether implementing those features on the JVM was a legitimate hurdle to not including them.
But the fact that the guy who wrote generics for Java then went and created Scala with these features is telling.
I also don't think that "Java is conservative" is the correct description of what's going on here. Look at the OP (I know you're a Java guy, so I don't mean that literally)- Java used to get maybe one language change for each major version until 16 where it got a bunch.
Why? I don't know. But I think it's totally fair to observe two correlations in recent history:
1. Oracle obtained control of Java
2. Many other, very good, programming languages have been coming on the scene: Swift, Go, Rust, Kotlin, TypeScript, C#, F#. Even older programming languages have seen major advances in the last 5-10 years: C++ and PHP among them. I think it's pretty clear that Java just didn't feel the pressure to make itself better until recently.
You're right about your main points: for some years before Sun's demise Java was neglected and lacked resources, and Oracle has gradually but significantly increased its investment in the platform. It is also true that the abnormality of ~2000-2005, when there were few popular languages, has ended. But the following two facts are also true: Scala didn't invent those features, and neither Kotlin nor Scala are big enough to be serious competitors for Java. So Java might be feeling the heat of competition, but not from those two, and, like all other languages, it looks around and borrows features, but not from those two more than others.
It is possible that Java platform languages might indicate the palatability of certain features to Java programmers more than non-Java platform languages, and the Java language designers do have a closer relationship with the designers of other Java platform languages. But overall, these effects are not particularly big. Knowing the current designers of the language, I can say with certainty that Haskell has served as a bigger inspiration than either Kotlin or Scala on recent features and their design (inspiration, not direct source).
Java gets a bad rap from people who used it from the late '90s through the early 2000s and got burned out by XML and design-pattern-heavy frameworks, but it's a lovely language that, with a little discipline, can be used to create very lean-looking code.
Go is one of the HN darling languages and I work in Go everyday for work (and generally like it), but I really wish I could reach for Java most days.
> More seriously, what are you missing in Go that is well-done in Java?
1. Generics. And yeah, I know Go is getting generics "Real Soon Now" (tm), but it is incredibly annoying to write the same collection code over and over, and there are some third-party libs that would really benefit from generics (looking at you, Azure Go SDK).
2. Error handling... with the big caveat that I actually like Go's error handling mechanism at small scale but wish there was a good way to chain several operations together and return an error to the top if any failed... I find myself writing a lot of `err != nil` checks in sequence and I've found baking my own abstractions around this to be leaky or difficult to grow as the program requirements change.
3. Diverse collections API.
4. Iterators.
5. Not Java-the-language but the JVM has amazing monitoring tools.
> I assume verbosity of the code is still the defining characteristic of Java?
Pound for pound... I think Go and Java have about the same verbosity. I'm honestly never quite sure what people mean by "verbosity" in Java. Generally I interpret this as "frameworks" but I predicated my OP on the idea that legacy framework bloat is where most of people's frustration with Java lies... not the language itself.
> I'm honestly never quite sure what people mean by "verbosity" in Java.
Java improved with the "var" keyword (use with caution!) and the introduction of records. These are not the only code-shortening features (there are also, e.g., default methods on interfaces, the diamond operator, lambdas, convenience "of(...)" factory methods, even the fluent style), but, used well, they can really reduce verbosity.
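As a rough illustration of how much boilerplate records and `var` remove (a sketch, assuming Java 16+; `Point` is just a made-up example type):

```java
// A record (Java 16+) generates the constructor, accessors,
// equals/hashCode, and toString that a classic bean spells out by hand.
record Point(int x, int y) {}

public class VerbosityDemo {
    public static void main(String[] args) {
        var p = new Point(3, 4);   // 'var' (Java 10+) infers the type Point
        System.out.println(p.x());                      // accessor is x(), not getX()
        System.out.println(p);                          // Point[x=3, y=4]
        System.out.println(p.equals(new Point(3, 4)));  // value-based equality: true
    }
}
```

The pre-record equivalent of that one-line record is easily 30-40 lines of getters, equals, hashCode and toString.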
Also, a lot of verbosity in Java came from people going off the rails with design patterns to hide crap multiple levels deep in a file with 5+ words in the class name.
Much of the verbosity came from the false presumption that every field in every class needs a public getter and setter. (I cannot express in words how terrible this is.) Some can come from the framework, mostly poorly-designed ones.
I think Java 11 and higher are reasonably terse/expressive without being overly dense.
Remind me, why doesn’t Java have object-properties yet?
So far the only reason I’ve concluded is Java’s language designers’ egos were so damaged by C#, Swift, Kotlin, TypeScript, etc that after dogmatically denying the developer-productivity benefits of properties for the past 20 years that to concede now would mark the end of Java as a language entirely.
...I kid, but seriously I haven’t heard any compelling argument from the Java camp yet for refusing to add this feature. Object-properties are easily the lowest-hanging-fruit with the biggest productivity-gains.
Heck, even some of the most popular C++ compilers support object-properties as proprietary extensions.
It's so crazy - with all the huge and radical features they've added this would surely have been so simple by comparison and yet added more value than about 80% of the things they've done. It's about half the reason I still use Groovy now since Java has solved most of the rest over time. But I just cannot go back to writing getters and setters!
Couldn’t agree more, the madness of idiomatically adding setters and getters to every field was something I ended up putting an end to in my last gig, and my life improved immeasurably as a result.
I think I'm going to convince my team to get to 16 (with records) before I can convince them to stop adding getters and setters everywhere, but just out of curiosity... how did you manage to make this happen?
Well - I was the boss :) so that helped. But I’d also taken over a fairly unhappy and unproductive team that was ready to try new stuff. I’d been out of the game for a few years and had some fresh perspectives too.
So I did some small standalone projects and demonstrated how much easier life was with this and other (more significant) changes. For example, we had loads of operational problems with dependency injection - the usual cognitive and debugging issues - so I threw all of that out too, along with the frameworks that implemented them. Things started booting in a few seconds instead of minutes and I’d say the lack of setters/getters was mostly done because I built trust in my team that I was making their lives better so they just followed me.
Probably not the answer you were looking for but there was friction from the devs and in the end I just showed them how much nicer life could be...
I agree with your points here, but just want to add that, while Java's generics are better than the non-existent generics in Go, I still find Java's type system to be really subpar for today's era.
The type system is not strong or expressive enough to do full type erasure, but then we're stuck with type-erased generics, so it's just incredibly frustrating.
Sorry that I find it necessary for yet another answer to your verbosity question, but: The real verbosity is in the standard libraries. What would be a one-liner in any other language is usually at least 2, often 3, and sometimes enough that you end up writing your own wrapper function or library. Especially noticeable is the agony of Lists & Maps, which are first-class citizens in most languages (classic job interview question: what is the difference between ArrayList and LinkedList? Real answer: Nobody actually uses LinkedList).
The stdlib often seems to be written by people who had no intention of using it. I guess maybe this applies to C++ as well, so for some folks it seems normal.
About verbosity: you needed to define classes for so many things. E.g., you want others to hook into processing at certain places, so you create a listener interface with some methods. The provider needs to work with those, and the consumer also needs to work with them.
In modern Java you can often make do with functional interfaces and lambdas, but I don't think it always works.
And in emacs: the provider does run-hooks, and the consumer defines a function and says add-hook. Done. Very little ceremony.
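For what it's worth, the add-hook/run-hooks pattern ports to modern Java with functional interfaces fairly directly — a minimal sketch (`Hooks`, `addSaveHook`, and `save` are hypothetical names, not a real API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal hook mechanism: the provider keeps a list of callbacks,
// and consumers register plain lambdas -- no listener class required.
public class Hooks {
    private static final List<Consumer<String>> saveHooks = new ArrayList<>();

    // consumer side: roughly emacs' add-hook
    public static void addSaveHook(Consumer<String> hook) {
        saveHooks.add(hook);
    }

    // provider side: roughly emacs' run-hooks
    public static void save(String doc) {
        // ... do the actual save, then notify everyone who hooked in
        saveHooks.forEach(h -> h.accept(doc));
    }

    public static void main(String[] args) {
        addSaveHook(doc -> System.out.println("saved: " + doc));
        save("notes.txt"); // prints "saved: notes.txt"
    }
}
```

Still more ceremony than emacs, but nothing like the listener-interface days.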
Now we've got Spring, which in my, admittedly limited, experience does a great job of transforming what could have been compile-time errors into run-time errors.
Spring Native is a game changer for startup times for the many organizations using containers. There are other Spring-related issues you still have to deal with, though.
Well said :) . Spring is a great example of how useful ideas can be misapplied with catastrophic results. For example, its liberal encouragement of dependency injection leads to a multitude of interfaces that are only ever implemented once by a production class, and maybe once more by a test class, even though all Java methods are virtual and testing could be done without requiring the interface.
It's reasonable to provide a single implementation of an interface, if the goal is to facilitate dependency injection for that component. The problem I find with Spring is that it is designed for dependency injection at all levels of its architecture, leading it to be one of the ultimate examples of Ravioli Code.
DI is a powerful concept, but Spring projects rely on DI in such a generic way that it often doesn't even make sense for your application. You have to gain intimate knowledge of the abstractions, and inject a bunch of code in a bunch of places just to make it do the very-straightforward thing you were trying to do.
> It's reasonable to provide a single implementation of an interface, if the goal is to facilitate dependency injection for that component.
You don't really need dependency injection if there is only a single choice of what to inject. You can just refer in code to the only possible component.
>> Java has all methods virtual and testing could be done without requiring the interface.
Not sure what you mean by that. I thought the interface/implementation divide is the way to implement virtual calls. Unless you want testing frameworks to do bytecode instrumentation.
If your interface is going to be implemented by just one class, I'd suggest skipping the interface - it reduces the amount of code. If you still need to pass a class A as a parameter in order to test a class B, you can instead pass a class C that inherits from A, with all the necessary methods overridden.
If you just create a class A in Java, no interfaces involved, class A will have all its methods virtual. I'm not sure what you mean by an interface/implementation divide.
Running ‘mvn install’ on even a simple Spring project with default dependencies is scary to watch on the console. So many things happen and it’s not even the verbose mode.
Running ‘dotnet build’ is much saner, and one can reasonably understand what happens.
> More seriously, what are you missing in Go that is well-done in Java?
As cliche as it is to say, Go missing generics (for the time being) does hold it back in many ways relative to Java. I like and use both languages regularly, and as I thought about my answer to this question I realized that essentially all of my complaints stem from the lack of generics - things like streams, a rich collections framework, and non-channel concurrency features (java.util.concurrent among them) don't exist in Go because you can't build them as generically as they need to be to be useful.
You might have said "well-done" as a way of excluding generics in Java since many people like to suggest they're not well done; of course most developers would like more, but they already enable an enormous amount of stuff not possible in Go.
The java ecosystem is very nice. Not necessarily the language itself, but everything else. The jvm, the tools, the libraries are all very mature and good.
Skipping java and instead using Kotlin allows one to reuse all that knowledge in a great language as well.
I guess it depends upon what you mean by ecosystem.
Java the language is ok. The culture (which is part of the ecosystem) and how you're pushed to write code is the biggest problem with Java. The wide array of tooling, frameworks, and libraries within the ecosystem is nice.
But the way you have to use them tends to be shit due to the culture surrounding the language. At least it seems to be shifting to something more sane.
A good language ought to discourage (enough) the bad practices. Culture forms slowly, and it's Java's fault that the culture managed to produce such excesses as the proverbial FactoryFactoryFactory.
Edit: I still think Java is a good language, especially later versions, and both original goals and recent advances are quite noble.
I somewhat agree, but Java grew up in a different era, when communication about these things wasn't as easy. It's hard to change the direction of something as large and widely deployed as Java. It's happening, but it will take time, and Java is always going to be dealing with its legacy, as there's just so much of it out there.
Huh, one only needs to look at first-party Java libraries from earlier times. It is pretty clear the engineers at Sun also believed that AbstractFactoryFactory-everything was the way the world needed to be rebuilt.
Of course, one can show empathy and understand the justifications for things they like, and simply laugh out "LOL Go No Generics" when they don't.
I laughed and got nauseated at the same time just looking at that FizzBuzzEnterprise code, LOL. I'm a minimalist programmer myself, so the thing I hate most in coding is over-engineered code. Yes, it's a joke, but a joke based on real life, haha.
I disagree that it's a lovely language. I think, as developers, we very quickly develop Stockholm syndrome. Once you "learn" a language, it's really easy to churn out code and apply idioms without even realizing that you're constantly writing workarounds and kludges for your language's deficiencies.
As a polyglot dev, the following are my gripes with Java:
* null - we all know, so I'm not going to bother expanding except to say that @NotNull is NOT a solution and it doesn't guarantee shit.
* interfaces are extremely lacking compared to type classes and lead to verbose and cumbersome patterns such as "Adapter".
* Type-erased generics. Why shouldn't I be able to implement a generic interface for multiple types? E.g., class MyNumber implements Comparable<Integer>, Comparable<Short>, Comparable<Long> {}
* It only just got Records and sealed interfaces, so thank goodness for that. But prior versions of Java are extremely lacking in expressiveness without these.
* I don't hate checked exceptions as a concept, but the fact that you can't be "generic" over the exceptions in a function signature is frustrating. This is why everyone says they're "incompatible" with Java's closure APIs.
* No unsigned ints.
* Silent integer overflow/wrap-around. It's not C- did it really have to copy this insanity?
* Dates and timezones are still insane, even in Java 8+.
* The fact that arrays got type variance wrong.
* JDBC sucks. JPA also kind of sucks.
* No concept of `const` or immutability.
I'm not saying that Java is the worst language in the world or anything, but it's far from great, IMO. Most programming languages are pretty awful, IMO.
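The type-erasure complaint above is easy to demonstrate: two differently parameterized generic instantiations are the same class at runtime, which is exactly why one class can't implement `Comparable<Integer>` and `Comparable<Long>` at once — after erasure they would be the same interface twice. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // The type parameter is gone at runtime: both lists are plain ArrayList.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}
```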
I think that's a mixed blessing. I believe Java did this deliberately to avoid the trouble that C and C++ have with signed and unsigned integer types having to coexist. Personally I've never been inconvenienced by Java's lack of unsigned integer types, but I'm sure it can be annoying in some situations.
I'm quite fond of Ada's approach to integer types, but I suspect I'm in a minority.
> Silent integer overflow/wrap-around. It's not C- did it really have to copy this insanity?
Curiously this cropped up 10 days ago. [0] You're not alone. The great John Regehr put it thus: [1]
> Java-style wrapping integers should never be the default, this is arguably even worse than C and C++’s UB-on-overflow which at least permits an implementation to trap.
> The fact that arrays got type variance wrong.
At least Java has the defence that they didn't know how it would pan out. C# has no such excuse in copying Java.
> No concept of `const` or immutability.
I recall a Java wizard commenting that although a const system is the sort of feature that aligns with Java's philosophy, it's just too difficult to retrofit it.
> For me as a language designer, which I don't really count myself as these days, what "simple" really ended up meaning was could I expect J. Random Developer to hold the spec in his head. That definition says that, for instance, Java isn't -- and in fact a lot of these languages end up with a lot of corner cases, things that nobody really understands. Quiz any C developer about unsigned, and pretty soon you discover that almost no C developers actually understand what goes on with unsigned, what unsigned arithmetic is. Things like that made C complex. The language part of Java is, I think, pretty simple. The libraries you have to look up.
Since Java 8, though, the standard library has had static helper methods for unsigned arithmetic (Integer.divideUnsigned, Integer.compareUnsigned, and friends on Integer and Long).
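Those helpers reinterpret an ordinary signed int's bit pattern as unsigned, e.g.:

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        int x = -1; // bit pattern 0xFFFFFFFF, i.e. 4294967295 when read as unsigned

        System.out.println(Integer.toUnsignedString(x));        // 4294967295
        System.out.println(Integer.toUnsignedLong(x));          // 4294967295
        System.out.println(Integer.divideUnsigned(x, 2));       // 2147483647
        System.out.println(Integer.compareUnsigned(x, 1) > 0);  // true: unsigned -1 is huge
    }
}
```

So the capability exists, but there's no unsigned type in the type system to keep you honest.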
I don't know about Ada, but I enjoy Rust's strictness when it comes to numeric types.
> Java-style wrapping integers should never be the default, this is arguably even worse than C and C++’s UB-on-overflow which at least permits an implementation to trap.
EXACTLY. It's f-ing stupid. C's excuse was compilers doing magic on UB or whatever. Java has no such excuse. They just wanted it to behave the same as C/C++ to attract C++ devs.
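The silent wrap in question, concretely:

```java
public class WrapDemo {
    public static void main(String[] args) {
        // No exception, no warning: the arithmetic silently wraps around.
        System.out.println(Integer.MAX_VALUE + 1);   // -2147483648 (Integer.MIN_VALUE)
        System.out.println(1_000_000 * 1_000_000);   // -727379968, not 10^12
    }
}
```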
> At least Java has the defence that they didn't know how it would pan out. C# has no such excuse in copying Java.
My understanding was that they DID know it was wrong and chose to do it anyway because it was more convenient and ergonomic to allow it that way. I guess they realized that was a terrible idea, because the generic collection interfaces do it correctly.
I don't see how const and immutability align with Java's original philosophy of being object-oriented, which is all about opaque objects that control internal mutable state. The very fact that it's taken until now to have records is proof-positive that "everything is an object" was taken pretty literally for most of its life. Immutable data doesn't really jive with that.
> I don't see how const and immutability align with Java's original philosophy of being object-oriented, which is all about opaque objects that control internal mutable state.
That's an interesting point, but an object presents an interface and promises to deliver some particular behaviour. A const system is a way of letting the type-system formalise some of an object's promises, no?
I don't think this is particularly 'leaky' (in the sense of leaky abstractions). Java's String class doesn't let me access its internal character array, but it still matters to me that it promises never to mutate it, nor to let anyone else mutate it (at least ignoring reflection). That's relevant at the level of the interface, not only at the level of the implementation.
I get what you're saying and I don't really disagree with you. An object's methods are an interface and its method signatures are a contract about what "messages" (in Alan Kay parlance) it will accept and return.
A C++ style const system would seem to be compatible with that.
And, in every practical sense, I would love such a thing existing in Java. I don't give a crap about whatever "OOP philosophy" and purity, even if my statement were correct/true.
However, (and this is just navel-gazing, honestly), adding const to object methods is exposing information about its internal state. That's not very "objecty" in the Alan-Kay-ish, Actor-model-ish sense. An object's internal state is "none of your business."
> Java's String class doesn't let me access its internal character array, but it still matters to me that it promises never to mutate it, nor to let anyone else mutate it (at least ignoring reflection).
I feel like this is a little different. Strings in Java are technically a class, but they're really treated like primitives (evidenced by the fact that literals are magically made into String objects).
But, it doesn't really matter. I agree. It's great that String promises to be immutable.
I'd argue that immutable class instances aren't really "objects" anymore- they're just (possibly opaque) data types.
> An object's internal state is "none of your business."
An object's state is my business, as immutable objects can be used in ways that mutable ones cannot. They can be passed to arbitrary functions with no need for defensive copying. They can also be useful in concurrent programming. None of that means breaching the separation of interface and implementation.
> Strings in Java are technically a class, but they're really treated like primitives (evidenced by the fact that literals are magically made into String objects).
Immutable objects can generally be treated as values, that's their charm. There's a good talk on this topic, The Value of Values. [0]
> immutable class instances aren't really "objects" anymore- they're just (possibly opaque) data types
They're certainly still objects. The essence of object-orientation is in dynamic dispatch, not in stateful programming.
> An object's state is my business, as immutable objects can be used in ways that mutable ones cannot. They can be passed to arbitrary functions with no need for defensive copying. They can also be useful in concurrent programming. None of that means breaching the separation of interface and implementation.
I'm not advocating for object oriented programming. What I'm saying is that if you "buy in" to the actual, abstract, concept of object oriented programming, then the internal structure or state of the object you're communicating with is, by definition, out of your control. Of course, in practice, you know that sending a "+ 3" message to the object "Integer(2)" is always going to return the same result, but you have no idea if the Integer(2) object you're talking to is logging, writing to a database, tweeting, or anything else. And in "true" OOP, you're not supposed to know- you just take your Integer(5) response message and go on your way. When I say "true OOP" I'm thinking about something like Smalltalk or an Actor framework/language.
I'm not talking about anything practical here. Just the "pure" concepts. Obviously, Java has made pragmatic choices to allow escape hatches from "true" OOP in a few places: unboxed primitives, static methods, and a handful of other things, probably.
So it's just very un-Smalltalk-like for an object's API/protocol/contract to make any kind of reference or promise about its internal state at all. That is implementation in a pure OO sense.
> if you "buy in" to the actual, abstract, concept of object oriented programming, then the internal structure or state of the object you're communicating with is, by definition, out of your control
That's not specific to OOP though, it's a very general concept in programming.
A program is generally decomposed into smaller units which make some promise about how they will behave, hiding their internal workings from the programmer who makes use of them. This is just as true for C/Forth/Haskell as for Python/Java/Smalltalk, depending on how a program is designed.
> you have no idea if the Integer(2) object you're talking to is logging, writing to a database, tweeting, or anything else. And in "true" OOP, you're not supposed to know- you just take your Integer(5) response message and go on your way
Right, you're meant to interact with an object in such a way that you rely only on the documented behaviour that the object promises to provide, you aren't meant to rely on knowledge of its internals. Objects are also a good way of cleanly separating concerns, and then composing the solutions.
On further thought I got it wrong earlier. You're right that internal state isn't my business, but immutability isn't about internal state.
Whether String (or some other class) is mutable or not isn't an implementation detail, it's an important property of the public interface offered by the class, and it's only a property of the public interface. I don't care whether my JVM implements String in Java or in assembly code, neither do I care if it's immutable internally, but I do care that the implementation satisfies the advertised behaviour of the class, and String promises to be (that is, to appear) immutable.
The internal implementation is required to meet the constraints imposed by the class's public interface, and in the case of String, those constraints include that the class must appear immutable to the user, even under concurrent workloads. In principle the implementation is permitted to have mutable internal state, provided the object always appears immutable to the user.
Similarly, whether a class is thread-safe, is a public-facing attribute of the class. The class can implement thread-safety any way it wants.
> EXACTLY. It's f-ing stupid. C's excuse was compilers doing magic on UB or whatever. Java has no such excuse. They just wanted it to behave the same as C/C++ to attract C++ devs.
But... as you yourself are saying, Java's behavior is not "the same as C/C++". Java wraps while in C and C++ signed overflow is undefined. (Interestingly, C++ is now moving away from UB for this, and defining wrapping semantics. While I'm not one for proof by authority, it looks like some very well-informed people disagree with you about the usefulness of this feature.)
Signed integer overflow checking can be almost free. Until it isn't, because it doesn't play nicely with SIMD code. So the code you want to run fastest will pay the biggest price. This article is from 2016 so take it with a grain of salt, but it looks like this can cause 20% to 40% slowdowns: https://blog.regehr.org/archives/1384
I understand that there are performance implications.
But a 20% to 40% slowdown for number crunching in a language that is primarily designed for writing super indirection-heavy, heap-allocation-heavy, application architectures is just nothing.
Having some kind of high performance math section of the standard library would be fine. But the default behavior is, frankly, dangerous. And for a 20% speed up on operations that are probably far less than 1% of the typical Java application?
> a language that is primarily designed for writing super indirection-heavy, heap-allocation-heavy, application architectures
Are there Java design documents that describe the language in these terms, as opposed to something like "a general-purpose object oriented language"?
> But the default behavior is, frankly, dangerous.
You keep saying variations of this, but you haven't really made the case.
True, if you increment a number, you will typically expect the result to be greater. But how many application domains are there where 2^32 - 1 is really the exact upper limit of the range of valid values? I would think that in most cases catching an overflow would come much too late, because the actual error is exceeding some application-specific limit rather than the artificial limit of the range of int. Or put differently, I bet 99.9% of ArrayIndexOutOfBounds errors are because indices leave their legal range without ever overflowing int.
> Are there Java design documents that describe the language in these terms, as opposed to something like "a general-purpose object oriented language"?
I'm sure there aren't. And truthfully, I understand that Java was supposed to be efficient enough to run on small devices and whatnot. But if you look at the evolution of the language as well as where it's mostly used in recent history (no more web browser applets, for example), it seems to me that it has a bit of an identity crisis. Is it the low level implementation language of the JVM platform, or is it a high level app development language?
> True, if you increment a number, you will typically expect the result to be greater.
I'd say this is a pretty big deal for people who are reasoning about code.
> But how many application domains are there where 2^32 - 1 is really the exact upper limit of the range of valid values? I would think that in most cases catching a overflow would come much too late, because the actual error is exceeding some application-specific limit rather than the artificial limit of the range of int.
Agreed. But I've almost never seen code that actually checks value ranges before and after math operations. And Java doesn't make it easy or efficient to do a "newtype" pattern so that the types actually are limited in any meaningful way.
Instead, most enterprisey backend systems I've worked on just accept a JSON request, deserialize some `quantity` fields to an `int`, and go to town with it.
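For what it's worth, records make a poor-man's newtype less painful than it used to be — a sketch, with the `Quantity` type and its range invented purely for illustration:

```java
// Hypothetical domain type: the range check in the compact constructor
// runs on every construction, so a Quantity can never hold a junk value.
record Quantity(int value) {
    Quantity {
        if (value < 0 || value > 1_000_000) {
            throw new IllegalArgumentException("quantity out of range: " + value);
        }
    }
}

public class NewtypeDemo {
    public static void main(String[] args) {
        System.out.println(new Quantity(42).value()); // 42

        try {
            new Quantity(-1); // a deserialized negative quantity fails fast here
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // quantity out of range: -1
        }
    }
}
```

It's still opt-in discipline rather than something the type system pushes you toward, which I think was the original point.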
> Or put differently, I bet 99.9% of ArrayIndexOutOfBounds errors are because indices leave their legal range without ever overflowing int.
I'm sure that's true. But indexing a collection is not the issue I was thinking about.
> I believe Java did this deliberately to avoid the trouble that C and C++ have with signed and unsigned integer types having to coexist.
The problems really only come from mixing those types, and the simple solution is to disallow such mixing without explicit casts in cases where the result type is not wide enough to represent all possible values - this is exactly what C# does.
I think Java designers just assumed that high-level code doesn't need those, and low-level code can use wrappers that work on signed types as if they were unsigned (esp. since with wraparound, many common operations are the same).
> Java-style wrapping integers should never be the default
The ironic thing about this one is that C# introduced "checked" and "unchecked" specifically to control this... and then defaulted to "unchecked", so most C# code out there assumes the same. Opportunity lost.
While we're on the subject of numeric types - the other mistake, IMO, is pushing binary floating point numbers as the default representation for reals. It makes sense perf-wise, sure - but humans think in decimal, and it makes for a very big difference with floats, that sometimes translates to very expensive bugs. At the very least, a modern high-level language should offer decimal floating-point types that are at least as easy to use as binary floating-point (e.g. first-class literals, overloaded operators etc).
C# almost got it right with "decimal"... except that fractional literals still default to "double", so you need to slap the "M" suffix everywhere. It really ought to be the other way around - slower but safer choice by default, and opt into fast binary floating-point where you actually need perf.
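Java has the same footgun: binary `double` by default, with `BigDecimal` as the opt-in decimal type and no literal syntax for it at all:

```java
import java.math.BigDecimal;

public class DecimalDemo {
    public static void main(String[] args) {
        // Binary floating point: 0.1 and 0.2 aren't exactly representable.
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false

        // BigDecimal (constructed from strings!) keeps exact decimal values,
        // at the cost of far clumsier syntax than a first-class literal.
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum);                                       // 0.3
        System.out.println(sum.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}
```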
> At least Java has the defence that they didn't know how it would pan out. C# has no such excuse in copying Java.
I think both Java and C# did it as an attempt to offer some generic data structure that could cover as many use cases as possible, since neither had user-defined generic types. In retrospect, it was an error - but before true generics became a thing, it was also a godsend in some cases.
My opinion is that a high-level language like Java has no business making me guess how many bytes my numeric values will occupy. It's insane. Since when does Java give a crap about memory space? "Allocations are cheap!" they said. "Computers are fast!" they said about indirection costs. Then they stopped and asked me if I want my number to occupy 1, 2, 4 or 8 bytes? Are you kidding me?
Yes, you should have those types available so that your Java code can interact with a SQL database, or do some low-ish level network crap, or FFI with C or something. But the default should basically be a smart version of BigInteger that maybe the JVM and/or compiler could guesstimate the size of or optimize while running.
Thus, IMO, there should be a handful of numeric types that are strict in behavior and do not willy-nilly cast back and forth. Ideally you'd have Integer, UInteger, PositiveInteger, and a similar suite for Decimal types.
Schemes have done numbers correctly since basically forever.
> the default should basically be a smart version of BigInteger that maybe the JVM and/or compiler could guesstimate the size of or optimize while running.
I suspect this would be disastrous for performance. I believe Haskell uses a similar approach though.
Sometimes you want to store 20 million very small values in an array. Forcing use of bigint would preclude doing this efficiently (in the absence of very smart compiler optimisations that is).
As int_19h points out, the Ada approach lets us escape the low-level world of int8/int16/int32/int64 while retaining efficiency and portability and avoiding use of bigint.
> there should be a handful of numeric types that are strict in behavior and do not willy-nilly cast back and forth
I agree that reducing the number of implicit conversions allowed in a language is generally a good move. Preventing bugs is typically far more valuable than improving writeability. This is another thing Ada gets right.
> I suspect this would be disastrous for performance. I believe Haskell uses a similar approach though.
>
> Sometimes you want to store 20 million very small values in an array. Forcing use of bigint would preclude doing this efficiently (in the absence of very smart compiler optimisations that is).
I suspect that it would. I also suspect that I don't care. :p
We're talking about Java. Yes, you can write high-performance Java and I wouldn't want to take that option away. But look at the "default" Java application. You have giant graphs of object instances- all heap allocated, with tons of pointer chasing. You have collections (not arrays) that we don't have to guess the maximum size of.
If you're storing 20 million small values in an array, then go ahead and use byte[] or whatever. But that should be in some kind of high performance package in the standard library. The "standard" Integer type should err toward correctness over performance- the very same reason Java decided to be "C++ with garbage collection".
I'm also not literally talking about the BigInteger class as it's written today. I'm talking about a hypothetical Java that exists in a parallel universe where the built-in Integer type is just arbitrarily large. It could start with a default size of 4 or 8 bytes, since that is a sane default. Maybe the compiler would have some analysis that sees the number could never actually be large enough to need 4 bytes and just compile it to a short or byte. These things should be immutable anyway, so maybe the plus operator can detect overflow (or better if the JVM could do some kind of lower-level exception mechanism so the happy path is optimized) and upsize the returned value size. Remember, integer overflow doesn't actually happen very often- that's exactly the reason people don't typically complain about it or ever notice it (except me ;)), so it's okay if the JVM burps for a few microseconds on each overflow.
All this doesn't matter because it'll never, ever, actually happen. I just think they made the wrong call and it has unfortunately led to lots of real world bugs. It's hard to write correct, robust software in Java.
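A quick Kotlin sketch of the tradeoff being described here, using java.math.BigInteger as a stand-in for the hypothetical arbitrarily-large Integer type:

```kotlin
import java.math.BigInteger

fun main() {
    // Today: fixed-size Int silently wraps around on overflow.
    var counter = Int.MAX_VALUE
    counter += 1
    println(counter)   // -2147483648, no error, no warning

    // The parallel-universe default: an arbitrary-precision integer just grows.
    val big = BigInteger.valueOf(Int.MAX_VALUE.toLong()) + BigInteger.ONE
    println(big)       // 2147483648, the mathematically correct result
}
```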
I suspect the performance penalty would be so severe it might undermine the appeal of Java. I don't have hard numbers on this though, perhaps optimising compilers can tame it somewhat. Presumably Haskell does.
A more realistic change might be to have Java default to throwing on overflow. The addExact methods can give this behaviour in Java. In C# it's much more ergonomic: you just use the checked keyword in your source, or else configure the compiler to default to checked arithmetic (i.e. throw-on-overflow). This almost certainly brings a performance penalty though.
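For reference, the addExact route looks like this on the JVM (checkedAdd is just an illustrative wrapper, not a real API):

```kotlin
// Math.addExact throws ArithmeticException instead of silently wrapping.
fun checkedAdd(a: Int, b: Int): Int = Math.addExact(a, b)

fun main() {
    println(checkedAdd(1, 2))   // 3
    try {
        checkedAdd(Int.MAX_VALUE, 1)
    } catch (e: ArithmeticException) {
        println("overflow detected: ${e.message}")
    }
}
```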
Yeah, I don't have any real intuition about the performance cost, either. But real-world Haskell programs do fine, as you said. And Haskell has fast-math libraries that, presumably, give you the fast-but-risky C arithmetic.
I also agree that a "more realistic" option is to just throw on overflow by default, the same way we throw on divide-by-zero.
OP mentioned "Ada's approach to types", as well. Ada lets you write stuff like "type T is range 1 .. 20" or "type T is digits 18 range -1.0 .. 1.0". This then gets mapped to the appropriate hardware integer or floating-point type.
Yeah, I've read little snippets like that from blog posts and stuff, but I've never written a single line of Ada, so I really don't know how that works out in practice.
What happens if you overflow at runtime? A crash, I assume/hope?
My point of view is that this is the opposite of what I'm talking about anyway. Java is a high level language where we are usually writing in Java because we're agreeing to give up a lot of raw performance (heap allocations, tons of pointer chasing) in order to have convenient models (objects) and not have to worry about memory management, etc.
In light of the above, I don't see why the default for Java is to have these really nitty-gritty numeric types. I don't want to guess how big a number can be before launching my cool new product. Just like I don't use raw arrays in Java and have to guess their max size- I just use List<> and it will grow forever.
> What happens if you overflow at runtime? A crash, I assume/hope?
In Ada, if range constraints are broken at runtime, a Constraint_Error is raised (or 'thrown', if you prefer). [0] (That's assuming of course that range checks haven't been disabled, which is an option that Ada compilers offer you.)
> I don't see why the default for Java is to have these really nitty-gritty numeric types
At the risk of retreading our earlier discussion:
I think the short answer is performance. Java has lofty goals of abstraction, yes, but it also aims to be pretty fast. If it didn't, its appeal would diminish considerably, so it's reasonable that they struck a balance like this. Same goes for why primitives aren't objects.
It depends on the base type - you can get the traditional unsigned integer wraparound behavior, too. But Ada is very explicit about this, to the point of referring to them as "modular types", and defining them using the mod keyword instead of range (e.g. "type Byte is mod 256").
Think of the range of permissible values as a contract. I agree that the default should be "no limit", but there are many cases where you do, in fact, want to limit it, that have nothing to do with performance per se - but if the language has direct support for this, then it can also use the contract to determine the optimal representation.
Your analysis is interesting, though far from exhaustive. It would be nice to have a collaborative feature matrix for languages on GitHub.
Kotlin solves the following points:
null - we all know, so I'm not going to bother expanding except to say that @NotNull is NOT a solution and it doesn't guarantee shit.
I don't hate checked exceptions as a concept, but the fact that you can't be "generic" over the exceptions in a function signature is frustrating. This is why everyone says they're "incompatible" with Java's closure APIs.
No unsigned ints.
No concept of `const` or immutability.
Kotlin allows you to specify either covariance or contravariance, so I guess it fixes this point too?
> The fact that arrays got type variance wrong
> interfaces are extremely lacking compared to type classes and lead to verbose and cumbersome patterns such as "Adapter".
Interesting, can you link to an example?
Kotlin has first class support for delegation as an alternative.
> Type-erased generics. Why shouldn't I be able to implement a generic interface for multiple types? E.g., class MyNumber implements Comparable<Integer>, Comparable<Short>, Comparable<Long> {}
Kotlin has support for reified generics to some extent.
The JVM is currently getting state of the art support for it too and for specialization.
> Dates and timezones are still insane, even in Java 8+.
I always hear that Java has got the best Time library, what is the complaint about?
> JDBC sucks. JPA also kind of sucks.
Do yourself a favor and use the JDBI or so many other sexy alternatives.
Of all your points the only unaddressed are:
> interfaces are extremely lacking compared to type classes and lead to verbose and cumbersome patterns such as "Adapter".
And
> Silent integer overflow/wrap-around
Strong agree about the Stockholmization and cargo cultism of language constructs.
> Most programming languages are pretty awful
Correct, hence why Kotlin stands out.
Kotlin does not fix the issues with checked exceptions. It gives up and gives us nothing for error handling. So, for all the beauty and magic of a strong static type system, I have absolutely no idea if `fun foo(i: Int): Int` is just going to crash my program when I give it -1.
"Do you have a fatal error? Throw an exception."
"Do you have a non-fatal error that's totally expected as part of your API? Throw an exception."
"Do you want to cancel a coroutine? Throw an exception."
"Do you want to define a class and validate the parameters you pass to the constructor? Screw the type system! Write an init {} that throws an exception!"
Kotlin also doesn't really "fix" Java's lack of unsigned ints. The implementation that Kotlin provides is really poor. That's partly because of Java's lack of unsigned ints, so it's not entirely their fault, but it's a really bad API and going between signed and unsigned ints is very bug-prone. They also don't work with serialization libraries, because they're implemented as inline classes.
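A small sketch of the signed/unsigned gotcha: Kotlin's toUInt reinterprets the bits rather than range-checking, so negative values slip through silently:

```kotlin
fun main() {
    val signed = -1
    val unsigned: UInt = signed.toUInt()   // bit reinterpretation; no range check, no error
    println(unsigned)                      // 4294967295
    println(unsigned.toInt())              // -1 again, silently
}
```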
Kotlin doesn't have the issue with array variance because it doesn't really have arrays like Java has. So that's good.
With respect to interfaces vs. type classes. The issue is this: let's say you're writing an API and you decide that you want to define an interface. Let's define it as `interface MaybeEmpty { fun isEmpty(): Boolean }`. So in your API, you might have some function like `fun foo(maybeEmpty: MaybeEmpty) { /* do something with maybeEmpty.isEmpty() */ }`
See where this is going? You realize: "Hey! I want to implement `MaybeEmpty` for a bunch of types to use in my function."
How do you implement `MaybeEmpty` for `String`? What about `Collection`? You can't. What you have to do in Java+Kotlin is define at least two new classes: `class MaybeEmptyStringAdapter(val value: String): MaybeEmpty { override fun isEmpty() = value.isEmpty() }` and `class MaybeEmptyCollectionAdapter<out T>(val value: Collection<T>): MaybeEmpty { override fun isEmpty() = value.isEmpty() }`.
Then when you actually want to use a String or a Collection in your fancy code, you have to write:
val s: String = getSomeString()
foo(MaybeEmptyStringAdapter(s))
With type classes, which exist in Scala, Rust, Swift, and Haskell, you can extend types with interfaces AFTER their definition. So I could implement MaybeEmpty right on String itself and then just pass the String value right into my function. No extra classes, no extra wrapping, no performance overhead.
Kotlin has extension functions, which are a neutered version of type classes.
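To illustrate: an extension function gives you the method syntax, but it doesn't make the type implement the interface, so calling the API still requires the adapter (a sketch reusing the MaybeEmpty example from above):

```kotlin
interface MaybeEmpty { fun isEmpty(): Boolean }

fun foo(maybeEmpty: MaybeEmpty): Boolean = maybeEmpty.isEmpty()

// Extension function: adds method *syntax* to String...
fun String.maybeEmpty(): Boolean = this.isEmpty()

// ...but String still doesn't implement MaybeEmpty, so foo() needs the adapter anyway.
class MaybeEmptyStringAdapter(val value: String) : MaybeEmpty {
    override fun isEmpty() = value.isEmpty()
}

fun main() {
    println("".maybeEmpty())                     // true: nice call syntax
    // println(foo(""))                          // does not compile: String is not a MaybeEmpty
    println(foo(MaybeEmptyStringAdapter("hi")))  // false: wrapping still required
}
```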
Kotlin does not support reified generics in classes. They can only be used in inline functions because it gets transpiled into the equivalent of `fun <T> foo(clazz: Class<T>)`. The JVM is not going to get reified generics any time soon. It's been in the works for years and years and will probably break things.
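Concretely (typeName is an illustrative function):

```kotlin
// Works: a reified type parameter, but only because the function is inline.
inline fun <reified T : Any> typeName(): String = T::class.simpleName ?: "?"

// Does NOT compile: classes can't have reified type parameters.
// class Box<reified T>

fun main() {
    println(typeName<String>())   // String
    println(typeName<Int>())      // Int
}
```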
Java's dates and times depend on a global, mutable, timezone setting. The datetime classes use it pervasively. Dealing with Calendar is awkward as heck. The whole TemporalAccessor interface is madness- you never have any idea what method is going to throw an exception for a given implementer. JDBC's dates and times are utterly broken because they use the old Java Date class and it will never change.
JDBI relies on JDBC, so I'm not convinced it actually fixes the problems other than having a nicer API. I stand partly corrected: "By default, SQL null mapped to a primitive type will adopt the Java default value. This may be disabled by configuring jdbi.getConfig(ColumnMappers.class).setCoalesceNullPrimitivesToDefaults(false)." So at least it's only wrong by default...
So, it seems to me that Kotlin actually addresses only two things on my list of complaints: null and array type variance.
EDIT: I accidentally forgot about immutability. Kotlin doesn't have that either. `val` doesn't mean "immutable", it means "not reassignable". I can 100% mutate the ever living crap out of:
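For instance (a sketch of the kind of thing meant):

```kotlin
fun main() {
    val list = mutableListOf(1, 2, 3)   // val: the *reference* can't be reassigned...
    list.add(4)                         // ...but the object it points to is fully mutable
    list.clear()
    println(list)                       // []
    // list = mutableListOf(5)          // this, on the other hand, won't compile
}
```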
But, I was complaining about the 8+ DateTime API. Now, granted, it's WAY, WAY, better than the old API. And it's much better than what's available in at least several other languages (cough JavaScript with no time zones at all).
It still depends on a global, mutable, time zone variable. It uses it in several places where it's kind of hard to get it to just use a given time zone.
The API is also just really hard to discover and use correctly. Look at what classes implement TemporalAccessor. It's not easy at all to guess which methods will throw an exception for a given implementer. Further, what if your function is just given a TemporalAccessor? I have NO IDEA what I'm actually able to call on the object without it exploding.
Calendar is awkward and hard to use.
Switching around between TimeZone, ZoneOffset, and ZoneID is pretty cumbersome. Some stuff takes Strings, some take ZoneID, etc. Some things that seem to take ZoneID don't seem to be able to take a custom TimeZone (not that I think I ever really needed that- I was trying to do something hacky IIRC).
A lot of methods take Long, which you have to just read the docs to know if that's seconds or milliseconds.
It's just kind of a bleh API.
Then you mix in the fact that JDBC is forced to use the old Date classes and that messes everything up because of timezone crap.
It's def not the case, I spend too much time here and on Reddit and people are always complaining about Go ( generics, errors, type system etc ... ), if you want the godly language it would be Rust, anything about Rust will be upvoted.
As for Java, it's a good language / runtime that is overly complicated behind layers of abstraction. Take Spring for example: magic everywhere; add an annotation here and it does x, y, z; you can't see it in the code, but it does something.
The hype around Rust is largely justified. There. I said it.
Rust is nowhere near perfect. It's also supposed to be a systems language- it was probably not originally intended to replace languages like Java for general "app" development.
But it's so much better of a language than most of the higher level languages you see in popular use: Java, PHP, JavaScript, Python, etc, that people are actually willing to deal with the lack of garbage collection just to get to use its excellent type system and well-designed standard library API. I think that says a lot about Rust and the programming language environment today.
It surprises me that we don’t yet have an applications language with an ultra-modern type-system. It’s so strange that the current leaders are a scripting-transpiler (TypeScript) and a systems language (Rust) - but not an apps language in-between.
...then again it’s understandable when you see how the traditional apps languages (C#, Java, etc) are severely hobbled by their VM/runtime, because that’s usually the source of constraints on their type-system. Kotlin and F# do some neat tricks to work-around the limitations they inherited from the JVM and the .NET CLR respectively, but I think we’ve reached the limits of what platforms originally designed ~25 years ago can reach. I just don’t see the JVM nor the CLR getting anything like first-class support for higher-kinded types or true algebraic types: there’s too much pressure from establishment banks and insurance companies not to break their decades-old codebases maintained by outsourcing companies. Languages like Swift and Go are free to break their own molds because they’re happy not leaving a legacy, but what will that mean for Rust? Systems code is the last place you need major breaking changes, but I don’t see Rust’s progress slowing down the way ISO C++ languished for almost 20 years. Hmmm.
The situation is changing in the CLR land, now that they can just say that legacy codebases can stay on .NET 4.x, and .NET Core is where all the fancy development is happening. There are already some C# language features that are Core-only because they require the corresponding runtime changes.
The other thing is - people always forget that, unlike JVM, CLR has more than just the object layer. By design, it has enough low-level constructs to compile a language like C++, complete with multiple inheritance, unions, varargs etc. Obviously, you can do pretty much anything on top of that - the only problem is that you won't be able to interop with other .NET code, except through C-style FFI. But, hypothetically, it would be possible to establish a higher-level ABI without baking it into the VM as an object model, thus allowing a reboot without throwing everything away.
> By design, it has enough low-level constructs to compile a language like C++, complete with multiple inheritance, unions, varargs etc
But the CLR doesn't support that. If you compile C++ code to IL then you'll get a compiler error if you use any types that use multiple-inheritance. The CLR's underlying type system is a huge limitation when it comes to using even simple modern ADTs.
For example, in F# you can define a union type, and the union subtypes can contain normal library types, but you cannot define a library type as a union subtype, whereas you can in TypeScript (and Rust too, I think?).
You're confusing managed C++ with C++/CLI. The latter tries to introduce additional constructs to C++, so that it can partake in the CLR object model - and there you get all those limitations like no multiple inheritance. But you can, in fact, compile any random C++ code to IL - just run cl.exe with /clr:pure. The only thing that doesn't work in that case is setjmp/longjmp; everything else is available.
If you want to see it for yourself, take some .cpp file, and compile it with cl.exe /clr:pure /O2 /FAs. The latter switch will dump the IL assembly into the corresponding .asm file. Or you can inspect the output with ILSpy etc. You'll see that it compiles native C++ types down to CLR structs with no fields, but with explicitly set size (via StructLayout.Explicit); and then uses pointer arithmetic to access field values.
It was deprecated and will be removed eventually, but for now, it's still working fine.
Anyway, the point is that it's possible within CLR constraints. There's just not much demand for it, and it's effectively a whole separate CPU arch for the compiler to support, in addition to x86/x64/ARM.
It surprises me as well. And to belabor my own point some more: there's a reason the #1 question about Rust from newbies seems to be "Is there a language like Rust, but with a garbage collector?" - sometimes reworded as "Is there a way to turn off the borrow checker?"
To be fair, I think that a decent amount is possible on JVM and CLR. Scala, for as much hate as it gets, has a much stronger type system than Kotlin/Java.
And Swift has its own backwards baggage! It has to work with Objective-C. There's actually a few weird things in Swift that I've bumped into that I'm pretty sure are only there because of Objective-C.
The Rust devs certainly take backwards compatibility very seriously and they guarantee backwards compatibility forever. There will be no breaking changes in Rust except in extreme cases of finding unsoundness or whatever.
There are currently ZERO good high level app-building languages, IMO.
Rust isn't it because of the lack of garbage collection (which is not a point against Rust- just for the "domain" of optimizing for user-space app development).
Swift would probably be it, but it's pretty much Apple-only and I don't expect that to change.
TypeScript is close, but it's held back by needing to work with JavaScript and the JavaScript standard library and ecosystem suck. It also can't do threads.
Kotlin/Java are decent, but not good at concurrency, and having things like different sized integer types is really silly for a high-level language where everything goes on the heap anyway...
Python is slow and can't do threads.
Some languages like Lisps and MLs might be good for building apps if they just had the ecosystem around them. Haskell would be a giant pain in the ass because real apps are full of "IO".
So, yeah. It's kind of a miracle that we can get anything done. I guess that's why we get paid so well...
Kotlin is the application language you're looking for (and Scala 3 to a lesser extent).
Contrary to what you say, it is the best language I've ever used for concurrency. It has it all: structured concurrency, cancellation, Flow, transparency (no await), etc.
Regarding your second point, the JVM is increasingly using the stack, and with the generics work nearing completion you'll be able to avoid the boxed versions of the primitive types (though they remain useful when you want identity).
> Contrary to what you say, it is the best language I've ever used for concurrency. It has it all: structured concurrency, cancellation, Flow, transparency (no await), etc.
I disagree.
Have you ever tried to actually implement something non-trivial that takes advantage of structured concurrency with cancellation? It's pretty hard to do correctly. Can you really tell me off the top of your head what the difference is between `withContext(coroutineContext) {}` and `coroutineScope {}` from within a suspend function?
Coroutines use unchecked exceptions for control flow. Kotlin also uses unchecked exceptions for fatal and non-fatal error handling. Figuring out how all these things interplay when it comes to coroutines and suspend functions has some subtleties that, IMO, are very difficult to figure out from just documentation and blog posts.
Also, Kotlin's standard types are entirely unsafe to use concurrently. The fact that MutableList inherits from List means that a function that accepts a List parameter CANNOT assume that the list won't change while the function is executing. So if you write `if (list.isNotEmpty()) { doSomething(list.first()) }` - that's a race condition because the list can literally become empty between the if clause and the body.
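The aliasing behind this can be shown even without threads: the "read-only" List and the MutableList are the same object (headOrNull is a hypothetical helper):

```kotlin
// Takes the "read-only" List type, but the object behind it may still be mutable.
fun headOrNull(list: List<Int>): Int? =
    if (list.isNotEmpty()) list.first() else null

fun main() {
    val mutable = mutableListOf(42)
    val view: List<Int> = mutable   // no copy; both names refer to the same list
    println(headOrNull(view))       // 42
    mutable.clear()
    println(headOrNull(view))       // null: the "read-only" view changed underneath
}
```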
"But, wait! You should have just been smart enough to make a copy of your List before sending it between threads/coroutines." Okay, great. Let's do full copies of potentially-large collections. Thank goodness Kotlin is so concurrency ready that the standard collection types are persistent .. colle..ctions... oh.
Kotlin's concurrency story is really not that awesome. Scala is better, but still not perfect. Clojure is better still. Rust is good. Elixir (or anything with some kind of actor framework, I guess) is good. Haskell is good.
But I agree, overall, that if I had to pick a best app language today, it's either Kotlin or Scala, or Swift if you're writing for Apple stuff. I'll admit that I have a glaring experience gap with .NET languages, so I can't honestly say anything about C# and F#.
> I'll admit that I have a glaring experience gap with .NET languages, so I can't honestly say anything about C# and F#.
C#/.NET comes with language-level mutexes (`lock`) and the .NET library has thread-safe generic collections (ConcurrentDictionary, ConcurrentBag) and true immutable collections (ImmutableArray, ImmutableList, ImmutableDictionary) with optimized copy operations (e.g. ImmutableList.Add is O(log n), while ImmutableArray.Add is O(n)). It's a nice addition to the library with only a few warts.
Java/Kotlin also have mutexes- just not as language built-ins (well, it does have
`synchronized`).
They also have a ConcurrentFoo set of collections as well. And actually an ImmutableMap (but I don't see ImmutableList, etc. Why?).
The "problem" is that they're opt-in.
I spent years writing multi-threaded C++. But I've become very spoiled with modern languages that make concurrency safe(r)-by-default, such as Rust, Clojure, and Elixir (I haven't actually used concurrent Haskell).
Honestly, it's not anywhere near as bad as it used to be. The ecosystem(s) have embraced immutability everywhere that it's possible, so you don't have to worry quite as much about accidents.
.NET doesn't have an ImmutableList either (the ImmutableBag type is an unordered collection). This is because unlike with hash-tables you always need to lock the entire structure when mutating a List/Vector (with hashtables you only need to lock the specific bin/bucket).
Honestly, I truly don't know and I don't feel qualified to judge the merits of the two approaches. There could be performance or behavior implications that they just prioritized differently from how C# does it. Or maybe something to do with the Java underpinnings. Are C# promises/whatever cancellable?
But, as an "end user", the exception thing drives me absolutely nuts. It wouldn't be nearly as bad if Kotlin either didn't use exceptions for ALL error handling, or even if it had checked exceptions so that "normal" errors would be, mentally, separate from fatal errors and/or coroutine cancellation.
> Kotlin/Java are decent, but not good at concurrency
Since when JVM languages aren’t good at concurrency?
Also, primitives are usually not heap-allocated, so I don’t see what’s the problem with it. It makes the JVM a beast when it comes to number crunching.
> Since when JVM languages aren’t good at concurrency?
Since it's trivially easy to accidentally mutate data across concurrent contexts (threads). You have to actively remember to reach for Mutexes. You also have to understand how `synchronized` works and remember to actually use it. If you use a class that someone else wrote, you have to dig into their code (if available) to make sure they made their class thread-safe.
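A sketch of what "opt-in" means here: nothing about the types forces this counter to be guarded; you have to remember the synchronized block yourself:

```kotlin
import kotlin.concurrent.thread

class Counter {
    private var count = 0
    private val lock = Any()

    // Nothing forces callers onto this path; forget the synchronized block
    // and concurrent increments can silently be lost.
    fun increment() { synchronized(lock) { count++ } }
    fun get(): Int = synchronized(lock) { count }
}

fun main() {
    val counter = Counter()
    val threads = (1..4).map { thread { repeat(10_000) { counter.increment() } } }
    threads.forEach { it.join() }
    println(counter.get())   // reliably 40000 only because we opted into the lock
}
```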
> Also, primitives are usually not heap-allocated, so I don’t see what’s the problem with it. It makes the JVM a beast when it comes to number crunching.
You don't need a number crunching beast to write most application-level software. That's my point. It's great for number crunching that you have byte, short, int, long as separate primitives. You know what most applications actually want? A number that won't magically wrap around to negative-LARGE_NUMBER when you guess the maximum size wrong. Java has fixed-size arrays, too, for super-performance mode. But applications just use List<> that has no size limit that you have to guess. It should do the same for integers.
Java itself is most definitely not overly complicated. Many frameworks, and the programs built with them, are, due to enterprise(TM) software development, but that's orthogonal. And Spring is a really feature-packed framework which has a solution for almost every business problem that one can face. Of course it comes with a great deal of complexity, but it is a tradeoff. I'd rather tinker with how to use a given feature than implement it myself, often in an inferior way.
> if you want the godly language it would be Rust, anything about Rust will be upvoted.
That is just the better half; the sad part is that anything negative about Rust will be heavily downvoted. Most of the time, disagreeing about Java/Go in their threads is just fine, but not with Rust.
You can just stop using Spring. There's plenty of good web libraries out there. Javalin, Spark, Micronaut, Quarkus, heck I'd probably even take JEE over Spring nowadays.
I got burned out by endless JVM preening and devs configuring their runtime options to use 128MB in a 2GB container, or capitalizing their -Xm option wrong, or what have you. So now with cloud computing you have the JVM in a container on a virtual host, all of which have their own constraints to set—matryoshka dolls all the way down.
Like, people always say, "Well that just means they didn't know what they were doing with the JVM"—yes, and it's been a problem for two decades and is about as likely to go away as buffer overflows in C.
Honestly the thing I like about Go, Rust, C++ the most is the resultant binary just runs in userspace, and you can set the constraints there. Even Python scripts don't have to muck with the JVM.
FactoryFactoryFactoryFactory code is the least of Java's issues.
Starting with OpenJDK 11 (and I think available in OpenJDK8 with an optional flag), the JVM will use the memory constraints of the container, so no more -Xmx flags and so on if you are running the JVM in a container environment.
Not sure what most people think about the var keyword, but I have to say that I dislike it in a production codebase. The example from the article uses var instead of String, only saving two letters, and for classes with longer names var still has the disadvantage of making the code less readable; either you're looking at the code in an IDE and have to click through to the method to find out the return type, or you're looking at it on GitHub, etc. and have to search manually for the return type. But I'm open to being persuaded otherwise :)
Is it in the realm of possibilities that Java will someday have runtime generics support (instead of type erasure)? It's the most frustrating aspect of the language because you can't use basic Java features like method overloading with them. Also, I don't understand how people use `Optional<ErasedType>`..? Is there a way to use method overloading with it? `ErasedType` could be _anything_ at runtime, doesn't sound fun. Besides, all reference types are already nullable/(optional)... so what's the point?
> Also, I don't understand how people use `Optional<ErasedType>`..?
I may be misunderstanding your question, but in general you wouldn't be doing anything type-specific with an Optional's value unless you were in a part of your code that knows the type at compile time. For example, a data structure might return an Optional<T> from a get() method, but all it's doing is returning the T or an empty.
> Besides all references types are already nullable/(optional)... so what's the point?
1. It's nice to explicitly say if a value might not exist than to be unsure if a null object is meaningful.
2. It can force consumers of an Optional to deal with the empty case (versus forgetting to null check)
3. Null might mean something like "this value was accidentally never initialized" versus "I am explicitly indicating an empty value"
4. It can give better developer ergonomics with convenience methods or monadic style (such as with Scala's Option type, where you could .map() on an Option, or merge multiple Options together)
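Point 4 in practice, calling java.util.Optional from Kotlin (findNickname is a made-up example function):

```kotlin
import java.util.Optional

// Made-up lookup: present for "alice", empty otherwise.
fun findNickname(user: String): Optional<String> =
    if (user == "alice") Optional.of("ally") else Optional.empty()

fun main() {
    // Monadic style: transform the value if present, fall back otherwise.
    val hit = findNickname("alice").map { "Hi, $it!" }.orElse("Hi, stranger!")
    val miss = findNickname("bob").map { "Hi, $it!" }.orElse("Hi, stranger!")
    println(hit)    // Hi, ally!
    println(miss)   // Hi, stranger!
}
```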
When exactly would you benefit from overloading based on a generic type? How come it is never brought up with Haskell for example, or the other majority of languages that employ type erasure?
You might want to have something like `String concatenate(List<String> strings)` and `List<T> concatenate(List<List<T>> lists)`.
Java won't let you do this because types are erased at compile time, and Java doesn't know exactly what function it's going to call until run time (in order to allow dynamically loading classes). Haskell does allow something similar to this kind of overloading, because Haskell determines what function is going to be called at compile time, before type erasure happens.
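Kotlin hits the same erasure wall on the JVM; its escape hatch is to rename one overload at the bytecode level with @JvmName (illustrative functions, not a real API):

```kotlin
// Both overloads erase to sum(List): Int on the JVM, so Kotlin reports a
// "platform declaration clash" unless one gets a different bytecode name.
fun sum(ints: List<Int>): Int = ints.sum()

@JvmName("sumNested")
fun sum(nested: List<List<Int>>): Int = nested.sumOf { it.sum() }

fun main() {
    println(sum(listOf(1, 2, 3)))                 // 6: resolved at compile time
    println(sum(listOf(listOf(1, 2), listOf(3)))) // 6: Kotlin picks the overload statically
}
```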
With checker framework, you can basically have it (and much much more, I believe no other language comes close to that level of static analysis). It is annotation-based and not a fancy keyword, but it is there.
Higher-kinded types do make some features obsolete, but Checker has a MapKey checker, plus Lock and Index checkers, and while locks may not be needed that often when everything is immutable by default, concurrency is hard and that alone is not always enough. Index errors are less frequent in Haskell due to (head:tail)-style pattern matching, but if you do use arrays, you can't express indexing safely without dependent types.
I’m not saying that it is better in every aspect than haskell (not too familiar with ocaml), but java is an easy and extremely popular language. And thus it has really really great tooling, and with annotations it is really extensible.
And if static analysis is not enough, there is also JML.
Also, Checker taps more into intersection types, which are not really expressible in Haskell; but I'm yet again not knowledgeable enough on the latter, so correct me if I'm wrong.
What's the best way to run java apps in a container nowadays?
The official openjdk images are based on "oracle Linux" and that doesn't sound right :) is there good small and well supported java image based on a popular distribution?
Nice to see everything in one place with short example code. Especially helpful for those still using Java 8 and looking to upgrade; Java 17 will be an LTS, so that will be a good opportunity to upgrade.
I've been using Java and the JVM since about 1995 when Sun released the early betas (pre 1.0). It's more or less a convenience marriage. Right place and right time I guess. I happened to specialize in it for a while. A long story short: I pivoted to Kotlin a while ago and stopped caring about what Oracle is doing to keep up. Too little, too late; and it seems to perpetually not matter at all unless you are still doing Java, which I'm not.
Kotlin works great on the JVM and about the same across Jdk 8-16 (probably from v6 even). A lot of stuff happened in between but the Kotlin compiler and library ecosystem seems to cover up and smooth out the differences. Most of the Java compiler or standard library changes stopped mattering to me. There's not a single thing from v15 to v16 that I can name that matters to me, at all. All my code works fine with v11 (the first and only post v8 LTS release) and the delta with getting it to work with v8 is trivial. There's just nothing since about 2014 that actually matters to me. Not even a little bit. All of my Kotlin code bases trivially work with any release since then.
I use Spring and Spring Boot in their latest versions and there's no question about Kotlin being the preferred language for that at this point. You get basically everything they did to make it work with Java 8 plus everything that Kotlin adds on top, which is a lot. Some of that even works with recent Java versions.
At this point that includes stuff like coroutines and Flows that make the whole abomination that is Spring Flux usable (as opposed to a minefield of complexity you have to deal with via 2014-era Java), data classes instead of last-century-style Java Beans (just, no!), etc. Java 9 onwards is kind of half getting there, with things like Loom and records landing only very recently and not having had any impact yet whatsoever. You want something modern, fresh, and nice with Spring that is not experimental and well supported: Kotlin is it. Everything else is stuck at being experimental until way after Oracle labels the latest & greatest Java version as LTS, which might happen in the next two years or so. Java 11 technically happened but never really registered in the same way that Java 8 did. For a lot of conservative places, 8 was the last stable release. That was in 2014! Everything after that is optional, nice to have, and yet to impact major frameworks. Except of course if you use Kotlin, which fast-forwards you about seven years plus whatever Java has left to catch up.
There really is no functional difference between JDKs other than slight changes in Java APIs that you probably shouldn't be using from Kotlin anyway. All the important stuff is tested with and works fine on v8: libraries, tools, and frameworks. Most of my code works exactly the same with Java 8 as it does with Java 16. I get very little value out of using anything more recent than JDK 8, but I still upgrade. Kotlin uses some of the newer bits transparently, but mostly it doesn't matter at all.
We are basically talking about purely non-functional things: performance being slightly better, new garbage collectors, or the whole module system, which is largely irrelevant to any kind of server-side usage. I've never been on a project where it actually mattered even the slightest; a complete waste of time, and one that pre-dates even the acquisition by Oracle. But it's nice that they cleared some technical debt there and got rid of some circular dependencies in a standard library that I largely don't depend on with Kotlin.
Which is a different way of saying that I get very little out of the Java updates of the last seven years. My view is mostly that if you're going to pick a JVM, you might as well pick a recent one, because it probably has a few fixes that may or may not matter to you. But it's going to work fine either way.
The next big thing for Kotlin is multiplatform and cutting loose from Oracle. I'm already using kotlin-js for some front-end work, and I'm keeping an eye on their WASM and native efforts as well. A lot of what I care about already works fine across all of those.
Java 8 was in 2014. Go was barely 2 years old. Many of the features presented here were considered common in other languages at the time, so if anything Java is late to the party.
As someone unfamiliar with C# but interested in how languages influence each other, could you provide some examples? I know C# was heavily inspired (perhaps not a strong enough word) by Java, but I'm not as familiar with the opposite direction.
Java does seem to absorb tried-and-tested ideas. There are a few major features I can think of that largely came from, or were inspired by, popular and mature third-party libraries (time, reactive streams, and various collection APIs come to mind).
C# and Java are kind of in the same space, so they tend to pick up the same kinds of features, though not always in the same way: records are different between the two languages, for example, as is the path to async.
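On the async point: C# baked `async`/`await` into the language as keywords, while Java has so far stayed at the library level with `CompletableFuture` and explicit continuation chaining. A minimal sketch of the Java side (class name is illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    public static void main(String[] args) {
        // Chained continuations instead of C#'s await keyword:
        // each stage runs when the previous one completes.
        int result = CompletableFuture
                .supplyAsync(() -> 21) // start work on a pool thread
                .thenApply(n -> n * 2) // transform the result asynchronously
                .join();               // block for the result (demo only)
        System.out.println(result);    // 42
    }
}
```

The visible difference is syntactic (callbacks vs sequential-looking code), but it shapes how libraries like Spring expose asynchronous APIs in each ecosystem.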
Java benefits from more experimentation in the JVM and bigger variety in implementers.
I need to find the funny slide deck by Brian Goetz (Java language architect at Oracle) where slide one (what industry thinks he does) shows that his job is to copy from C#, and slide two (what academia thinks he does) shows it's to copy from Scala ;)
Java employs the “last mover advantage” strategy. They wait a bit for other mainstream-ish languages to be a testbed for a new feature, and if it proves worthwhile, they try to incorporate it into the language. But a new feature is a huge responsibility, since it will have to be maintained “forever”.
C#, on the other hand, is quite brave and quick to implement everything under the sun, somewhat like C++, which sooner or later produces a really bloated language (imo, both are already quite complicated). And on backward compatibility, Java definitely takes the cake.
From the list provided in the article: record syntax, switch expressions, and var.
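For the curious, two of those three in one small Java (14+) snippet; the third, record syntax, is the one-liner `record Point(int x, int y) {}` used below. Names are illustrative:

```java
public class SwitchDemo {
    record Point(int x, int y) {} // record syntax in a single line (Java 16+)

    static String side(Point p) {
        var sign = Integer.signum(p.x()); // 'var' local type inference (Java 10+)
        return switch (sign) {            // switch as an expression (Java 14+)
            case 1 -> "right";
            case -1 -> "left";
            default -> "on the axis";
        };
    }

    public static void main(String[] args) {
        System.out.println(side(new Point(3, 4))); // right
        System.out.println(side(new Point(0, 7))); // on the axis
    }
}
```

All three map closely onto the C# features that preceded them (records, switch expressions, and `var` type inference), though the Java variants differ in details such as exhaustiveness checking.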
And by the way, for the downvoters: there wasn't any snark intended. C# itself started out as a copy-pasta of Java, and now takes heavy inspiration from F#. That kind of feature transfer between languages is a good thing.
They broke some dependencies that relied on JVM internals, which were never promised to stay stable (how could they implement anything otherwise?). And they implemented strong encapsulation by default so that such a step will never be necessary again.
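For context, the "strong encapsulation by default" referred to here is the Java 9 module system: packages in a module are invisible to other modules unless explicitly exported. A minimal, hypothetical `module-info.java` config fragment (module and package names are made up):

```java
// module-info.java (Java 9+). Only exported packages are accessible to
// other modules; internals like the old sun.* packages stay hidden.
module com.example.app {
    requires java.net.http;   // explicit dependency on a platform module
    exports com.example.api;  // everything else in this module is encapsulated
}
```

This is why code that reached into JDK internals needed `--add-opens` flags or porting once encapsulation was enforced.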
What backward compatibility did they break except for internal com.sun packages that weren't part of the JDK to begin with but only Sun's implementation?
Progress can seem glacially slow at times - it’s always disappointing to see your favorite upcoming feature get pushed to a later release - but the current language is undeniably light years better than what we started with.
Respect.