How much faster is Java 17? (optaplanner.org)
340 points by vips7L on Sept 15, 2021 | 247 comments



Considering tons of folks are still on Java 8, I would have liked to also see a comparison between that and Java 17. Based on the improvements shown from Java 11 to 17, one could expect even larger improvements over Java 8.


I'm surprised that it's now common for software to require a certain (old) Java version.

Wasn't the whole "write once run anywhere" promise about backwards compatibility too?


Yes. But in the move to java 9 they broke java’s customary backward compatibility and left behind a lot of users.

It doesn’t help that there is no good, clear and complete guide on how to upgrade SOAP clients.

I went through this recently and learned that because jakarta uses multi-release jars, we have to do the regular dependency changes and also change our fat-jar based build/release to Docker images. In other words, they decided to throw out decades of users’ investment in learning the ecosystem.

I’m not surprised that people seem to be leaving the ecosystem.


> But in the move to java 9 they broke java’s customary backward compatibility and left behind a lot of users.

The main backward incompatible changes between 8 and 9 were the changing of the version string to remove the "1." prefix, and the removal of a handful of methods hardly anyone had used. In other words, the chances of spec-compliant code that ran on 8 failing on 9 were slim. What happened was that in the long 8 timeframe, many libraries -- for various reasons, some reasonable -- have circumvented the Java spec and hacked into JDK internals, making themselves tightly coupled to 8. When internal classes changed in 9, those non-portable libraries broke, and so did their clients. Now that strong encapsulation is finally turned on (as of JDK 16), this shouldn't happen again.
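
To make the version-string change concrete, here's a minimal sketch of my own (not from any particular library) of the kind of parsing code that broke:

    // Brittle pre-9 idiom: assumes versions look like "1.8.0_292",
    // so the "major" version is the second dot-separated field.
    String v = System.getProperty("java.version");
    String major = v.split("\\.")[1];
    // On 8 this yields "8"; on "9.0.1" it yields "0", and on a bare
    // "9" it throws ArrayIndexOutOfBoundsException. Since JDK 9,
    // Runtime.version() is the supported way to ask, e.g.
    // Runtime.version().feature() returns 9, 17, ...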

There were some significant breaking changes to the spec in 11, but those comprised separating modules from the JDK into external dependencies, and didn't require any code change.


I remember there were a lot of libraries that were part of the jdk that got decoupled and no longer included in the move from java8 to java9. I specifically remember this impacting anyone who parsed xml or json. I vaguely remember it being something in the javax.validation package.

My company migrated from 8 to 11 but we had a lot of headaches around those libraries that were pulled out of the jdk.

To be fair, those should not have been coupled to the jdk in the first place, but it did break backwards compatibility which was a cardinal sin for java.


For a lot of people, Java is mainly used to turn XML files into stack traces, so breaking backwards compatibility in XML parsing is a big deal!

Although, if it gives you a stack trace even faster than before, I guess it could be considered a performance improvement...


XML parsing works just fine. It's SOAP and some other classes that were dropped.

I'm not sure if that's a real problem. All it takes is to add a few dependencies to pom.xml.
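
For example, re-adding JAXB and the JAX-WS runtime after their removal from the JDK (JEP 320) looks roughly like this; the versions shown are illustrative, so check for current ones:

    <dependency>
      <groupId>javax.xml.bind</groupId>
      <artifactId>jaxb-api</artifactId>
      <version>2.3.1</version>
    </dependency>
    <dependency>
      <groupId>org.glassfish.jaxb</groupId>
      <artifactId>jaxb-runtime</artifactId>
      <version>2.3.1</version>
    </dependency>
    <dependency>
      <groupId>com.sun.xml.ws</groupId>
      <artifactId>jaxws-rt</artifactId>
      <version>2.3.5</version>
    </dependency>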


It mattered to us. Our team owned over 50 microservices that all had to have their pom.xml files updated.

But it was worth it. Got us access to the java flight recorder which is an awesome debugging tool.


things like jaxb were removed, which i'm guessing plays a part in the soap discussion above. Not that it is difficult to fix, but...


Unfortunately, the only explanation I found for my troubles with SOAP and java 11 is found in the comments to this answer in StackOverflow: https://stackoverflow.com/questions/58319199/java-lang-class...


> the removal of a handful of methods hardly anyone had used

when we moved from Java 7/8 to Java 11 two years ago, we didn't have any issues with our third-party libraries, but we had a few pieces of code, mostly related to encryption libraries, that failed to compile with the newer JDK


The way to upgrade SOAP clients is to generously pour the office with gasoline and set it on fire.


It's SOAP, so flushing it down the toilet should do the trick.


What are your gripes with SOAP? Genuinely interested. I've used it once professionally, and while there was an initial learning curve, it seemed to have some nice properties once you got past that.



Those were exactly my sentiments back in the 2000s. I had to use SOAP a few times, then switched to a semi-proprietary binary protocol called Hessian [1] (and its XML-based sister, Burlap [2]) and it saved my day.

[1] http://hessian.caucho.com/

[2] http://hessian.caucho.com/doc/burlap.xtp


This is a ridiculous website:

http://harmful.cat-v.org/software/

They should be using rocks instead of a computer with this mindset.


What does the content on the rest of the site have to do with the link I posted? This is literally ad hominem - disregarding an argument because you dislike the author.


I dislike the author based on his/her writings.

Why would it be ad hominem (which is not even that)? Also, the link I posted is even about the same topic, not unrelated - so a logical flow in that one (and it has plenty) will easily apply just as well to an elaboration of one of the listed contenders on this page.


dismissing the argument based on your feelings about the author is literally ad hominem.


thank you for this


Just like with XML, it suffers from being about 300x more complicated than is proper for what it does


I really like Java the language, but I feel everything involved in working in Java is 300x more complex than it needs to be. (Even with an IDE)

Just getting code updates onto our old Java websites makes me realize why all the new stuff is written in arguably slower PHP and Python.


SOAP isn't a Java tech.


My experience:

It's very easy to overengineer, both the SOAP WS-* extensions and the underlying XML.

The tooling and ecosystem suck. I found it a lot more difficult to use clients like soapUI as opposed to just banging on a REST API with curl. The libraries are mostly java, with at least one c/c++ implementation. I had a lot of trouble getting different libraries to work together. And that's literally the entire point of SOAP. It's supposed to be the enterprise grade interoperability (of course, a huge oxymoron) compared to REST.


SOAP is fine if you're talking to yourself or otherwise confined to a single dialect. But that's not saying much since that's true of any data format. SOAP falls down when trying to get systems all developed by n different companies all talking to one another which was/is the whole point of the stupid thing.


Word.

As long as you're running the same code on both ends it sort of works. It's still overengineered to the moon and back, and a major PITA to deal with, but at least the damn thing works.

Anything beyond that fails in spectacular ways.

It will not be missed.


The people responsible for inflicting SOAP upon us should suffer.


Were there a lot of competing standards in SOAP, leading to incompatibility between libraries? That would indeed seem to defeat the point, if true.


But SOAP endpoints can be autogenerated (as opposed to REST), which basically makes this much less of an issue.


SOAP and its XML schemas and results are 100x more complicated than JSON. The problems outweighed the benefits.

I spent a few years working on SOAP for a telecom. Not pleasant.


>SOAP and its XML schema and result is 100x more complicated than JSON. The problems outweighed the benefits.

Please mention a few of these problems, I'm curious to hear.

The problem I see with REST/JSON-APIs is that they lack features that have to be tacked on after, creating an endless bikeshedding nightmare and instead of having one well-thought out solution, you get three or four solutions sloppily hacked together. Schemas are a prime example, actually: SOAP appears to come with them out of the box, while REST/JSON-APIs either lack them (just about as bad as not having any type system in your language, i.e. really bad) or tack them on after the fact with something like OpenAPI, which is honestly not great as far as schema languages go.

Again, I say this as someone who has professionally used SOAP only once, and very briefly at that. I'm not advocating that we re-adopt SOAP - I think that ship sailed long ago - but I really want to understand its opponents' opposition to the positive qualities it appears to have.


I used it for about 3-4 years at a major telecom and a large retailer a while back.

> Schemas are a prime example, actually: SOAP appears to come with them out of the box, while REST/JSON-APIs either lack them (just about as bad as not having any type system in your language)

The problem with SOAP is that it seemed to be designed by multiple committees with different agendas. From my imperfect memory, it's not just different versions of SOAP you have to contend with, but also different variants of schema flavors. Consequently, different languages and even libraries would have implementations that might support x but not y schema feature. It was an annoying compatibility nightmare, where you needed an additional complicated tool to verify it all.

Yes, JSON/REST have their own issues, but it's nothing that good documentation can't solve, and it's supported across most if not all major programming languages. Simplicity is often very underrated.


This is from 10+ years ago, but my biggest gripe with SOAP was that if you didn't have the exact same library on both ends then you had problems. Apache Axis vs CXF vs soap-ws, and that's just Java. Passing anything between Java and .NET was almost impossible.


That would indeed be a huge downside - the point of a specification is to make interoperability entirely seamless. If SOAP failed on that account, then it deserves its fate.


If you use the "Simple Object" abstraction, it's just fine.

Delving under the XML hood can be painful.


Publishing some flavor of SOAP services seems to be easy and common in companies that live in the Microsoft ecosystem.

I’m no fan of that, but unfortunately we are forced to use some such SOAP services.


Hmm, fat jars still work OK in the latest Javas. What is the issue with Jakarta multi-release JARs? An MR JAR is just a regular JAR with files in different directories. They should fat-jar together OK.

There's certainly no requirement to start using Docker images!


The problem I've run into with MR JARs is that they tend to break libraries that do a bunch of classpath scanning. MR JARs add some entries to the jar that older tools didn't understand and would choke on.

It's one of those things I would personally argue is a naughty hack that should be avoided if at all possible, but it's also something that's historically been ubiquitous within the Java ecosystem. It's frequently how convention-over-configuration dependency injection (as found in Spring Boot or Jersey) tends to be done, for example.
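
To make the breakage concrete, here's a sketch of the layout (my own illustration, not tied to any particular library):

    # A multi-release jar (JEP 238) can carry the same class twice:
    some-library.jar
      META-INF/MANIFEST.MF                        # "Multi-Release: true"
      com/example/Foo.class                       # used on JDK 8
      META-INF/versions/9/com/example/Foo.class   # used on JDK 9+
    # Classpath scanners that treat every .class entry as loadable
    # can trip over the versioned copies unless they know to skip them.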


Specifically, I was unable to find the gradle shadow/maven shade rules needed to use the jdk9+ version of the multi-release dependency "com.sun.xml.ws:jaxws-rt:2.3.5".

The reply I got on Stackoverflow from the person I think is the maintainer is "don't use fat jars", which is probably the correct solution, although most people use fat jars.

Lately, I've been reading that layered docker images should be a faster way to build and deploy java apps that have many tens of MB of dependencies that never change. It only works if you don't use fat jars.


> It doesn’t help that there is no good, clear and complete guide on how to upgrade SOAP clients.

> I went through this recently and learned that because jakarta uses multi-release jars, we have to do the regular dependency changes and also change our fat-jar based build/release to Docker images. In other words, they decided to throw out decades of users’ investment in learning the ecosystem.

Could you clarify what you ran into? Why docker? I'll have to do this soon.


Specifically, a ton of things that used to be included in the standard JDK, like nearly all XML processing, are now broken out into modules or require Maven dependencies, etc.

So it's not "turn-key" to upgrade to jdk 9 or above, like say, 6 -> 7 -> 8 was.

Sounds simple... "just add it to your maven deps!" - but in practice it's more complicated than that and requires careful planning and testing. Some things might even surprise you and run for a while before a classloader can't find something and explodes at runtime.

Java 9 created quite a mess. Once you finish that upgrade though, moving into Java 11 or anything newer is basically turn-key like it was before. But, this had the effect of many companies staying with Java 8 until forced to upgrade.


From OP:

> Could you clarify what you ran into? Why docker?

Not sure I follow why you had to turn to docker

> Some things might even surprise you and run for a while before a classloader can't find something and explodes in runtime.

The JVM is deterministic - I don't follow this statement?


> Not sure I follow why you had to turn to docker

I didn't, and OP could have stuck with Java8 since it's LTS. So I'm not sure either where Docker comes into play. It seems the parent was deploying fat jars, and now due to the complications of all the various deps, they opted to use Docker images as a new "fat jar". Perhaps it simplified their build process, but that's just a guess.

> The JVM is deterministic - I don't follow this statement?

Custom classloading simply requires a string path and the FQN of the class to attempt to load it from disk. Compile-time checking doesn't validate the actual existence of the class, which is the point of runtime custom class loaders.

A lot of plugin loaders are done this way, etc. So... your program might be humming along just fine until it classloads in a plugin (or whatever) that depends on Jaxb for example, then everything explodes since Jaxb is now a dep instead of built into the jdk.
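
A minimal sketch of that failure mode (the class name here is mine; JAXB is the example since it actually left the JDK):

    // Compiles and starts on any JDK; the lookup only happens when
    // this line executes, so the failure surfaces at runtime.
    public class PluginProbe {
        public static void main(String[] args) throws Exception {
            // Present in JDK 8; gone in JDK 11+ unless the jaxb-api
            // jar is added back as an explicit dependency.
            Class<?> c = Class.forName("javax.xml.bind.JAXBContext");
            System.out.println("Loaded " + c.getName());
        }
    }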


Sure but that's always been the case.

Anyways, I had read your comment as: ~"Classloader loads class X fine one moment and then suddenly can't" which is why I mentioned deterministic.


> Sure but that's always been the case.

Well, no it hasn't. Things like JAXB, for example, had been in the JDK ever since they were introduced (Java 6 in JAXB's case). XML processing code compiled with JDK 5 (circa 2004) still worked fine on Java 8, for example, with zero code or dep changes. Suddenly that assumption is broken with Java 9.

> Anyways, I had read your comment as...

It was just an admittedly contrived scenario where the upgrade path to JDK 9+ wasn't as straightforward as just adding deps to Maven and calling it a day, since you may not be aware of all code interactions, depending on the system you're upgrading.

Your program might even have a dep on some jar that was compiled under jdk4 and the author and source are nowhere to be found (or went out of business a decade ago)... and suddenly it breaks under java9. Things like that are largely what prevented mass adoption of jdk9 immediately.


Simple Object Access Protocol...

Simple: by the end I was dealing with self-signed bodies and validation, version hell, framework hell, and namespace super-hell.

Object: um, not really. It was request/response. Nothing really "OOP" about it at all.

Access: didn't really help much with access, that was all HTTP

Protocol: There were so many protocols, and frameworks attached to those protocols, and versions of the protocols, that... at the end of the day, it had no protocol.


The best thing about SOAP is that it drove industry-wide REST adoption. That had its own problems, but at least they were largely due to a failure of implementers to understand what using HTTP as an application rather than a transport protocol meant. And now we have GraphQL, which while it has its own faults[1] is far less unnecessarily complicated than SOAP and provides considerably more value.

[1] Oh crap I’d forgotten about SOAP faults until I wrote that word. Please help me I’m having traumatic flashbacks.


Can you give any concrete examples of what broke?


A lot of bcel/aspect code had to be rewritten. I've had to patch a couple transitive dependencies to bring a 2018 platform from Java 1.8 into Java 12 land, and it's stuck there forever since after 12 something else it depended on was removed. We're migrating to a simpler, saner runtime, but still, stuff takes time.


> A lot of bcel/aspect code had to be rewritten.

But why? What changed in the spec that forced a rewrite?


The typical reason for this is that the JVM changed bytecode parsing in a backwards-incompatible way (yes, this happens frequently, e.g. to fix validation bugs) and the fix for this was then only rolled into a new version of the bytecode manipulation library, but that in turn had its own set of API changes, regressions and bugfixes, meaning that now whatever code sits on top has to be changed, etc.
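
A sketch of where that coupling bites, using ASM purely as an illustration (the same shape applies to BCEL, Javassist, etc.):

    import org.objectweb.asm.ClassReader;
    import org.objectweb.asm.ClassWriter;

    // ClassReader rejects class-file versions it doesn't know about,
    // so every new JDK needs a new release of the bytecode library
    // before anything built on top can even parse the new classes.
    byte[] rewrite(byte[] classBytes) {
        ClassReader cr = new ClassReader(classBytes); // throws on an
                                                      // unknown version
        ClassWriter cw = new ClassWriter(cr, 0);
        cr.accept(cw, 0);                             // identity rewrite
        return cw.toByteArray();
    }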

Also most of the breaking changes from Java 8-11 are/were not spec changes. The spec leaves out many aspects of the Java platform that real apps rely on.

This idea that only apps that used JVM internals broke is totally wrong. I think the guys who work on Java think this because they don't actually use or work on any Java apps themselves beyond the ones that are a part of the JDK itself.


Thanks. Have you got any example issues that are public on GitHub or something? I can't imagine changes in byte-code parsing for example that you'd hit if you were following the spec from the start.


e.g. the validation of invokeinterface changed at some point in ways that broke common tools, if I recall correctly. I don't have bug links to hand right now. The JVM previously accepted something that it was later changed to reject. This often happens including in cases where it doesn't matter. See the Java 17 release notes about the change to validation of agentmain entry points for an example of the sort of thing that is done constantly, even where it isn't actually required by any other change.

People don't execute specs, they execute implementations. In the end whether something is or is not fully compliant with the spec doesn't change the costs involved in fixing it.


Exactly, it's not always easy to replace a transitive dependency and we're dismissing the platform anyway


They removed webstart - which is fundamental to how the apps we use are distributed. I believe that reason alone is why the distributor has stuck to Java 8/OpenWebStart.


Where are they going to? I've been out of the Java loop for a few years, maybe they landed where I did.


The big issue is around four things:

1) Certain libraries that were part of the JDK being moved out of the JDK, which usually required adding them as a module or dependency in $BUILD_TOOL_INSTRUCTIONS_FILE

2) Internal JVM APIs that weren't public being used via reflection by clever libraries, and then when they change, those libraries break

3) Bytecode emitting libraries - various frameworks love these, but bytecode that worked for Java 8 can fail on Java 9. Hence Spring 4 only supports JDKs 6 - 8. So to move to JDK 9+, you had to upgrade Spring to 5.x, and things that were tied to your version of Spring... and this process can very often suck.

4) New versions of library X only being available in class formats that Java 8 can't run. I encountered this with Jetty - version 10+ only support Java 11+, so if you're stuck on Java 8, you're limited to bug fix releases on the last version that supported Java 8.

Since Java 9, the JVM has been warning if you use internal APIs. As of Java 17, they have started enforcing strong encapsulation of those APIs ("private means private yo"), to give the JDK freedom to evolve without worrying about every library that got clever with the internals of the JVM. But given the very long lead time and ample messaging on this, I don't expect Java 17 breaking too many things.
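
If you're stuck with such a library, the escape hatches are explicit flags; the module/package below is just the most common example, and the right one depends on what the library pokes at:

    # Open a JDK package to reflection for code on the classpath:
    java --add-opens java.base/java.lang=ALL-UNNAMED -jar app.jar
    # Pre-17 only: downgrade illegal-access errors to warnings
    # (the --illegal-access flag stopped having any effect in 17):
    java --illegal-access=warn -jar app.jar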

At my last job, we had a rather large monolithic build that was tied to Java 8 because some modules would die a horrible death when compiled with or for a higher version Java. So I introduced Maven and Gradle toolchains[1] so that individual modules could compile using their preferred Java version, which freed up new modules/apps to use Java >8 as they saw fit. All the legacy apps that broke on Java 9+ could stay on 8, but the rest of the project was freed from their legacy crap.

[1]: https://docs.gradle.org/current/userguide/toolchains.html
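
For reference, a Gradle toolchain block looks like this (a sketch based on the docs linked above):

    java {
        toolchain {
            languageVersion.set(JavaLanguageVersion.of(8))
        }
    }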


It's common to require a certain old Java version or newer. In general Java is extremely backward compatible.

There have been a few minor breaking changes from JDK 8 to JDK 17, but the worst ones have had command-line options to disable them.

Maybe you're referring to JEE and/or specific "enterprise" software which does tend to move much more slowly.


I’ve run into several performance regressions from 8 -> 11 -> beyond

Code still worked but services would easily get overloaded, there would be one piece of code which ran crazy slow, or new memory issues would come up.

It absolutely is not just safe to jump versions in production; it's usually been weeks of testing, finding and fixing perf bugs.


Yes, what you describe is entirely believable. And one of the legitimate reasons why enterprise software is slow to upgrade.


This really hasn't been consistent with my experience as an end user. I have POS software that will blow up if any POS terminal applies a minor point-release update. I have current PMS software from Oracle that will infinitely NullPointerException if upgraded to a currently supported version of Oracle's JRE. It's going back a few years, but I remember backups being unreadable if you upgraded the JRE on a Backup Exec server.


That generally holds when you stick to public APIs. But there are many libraries that used reflection to access unstable internals. And that broke with modules.


No, it was more than that. Java has broken a ton of backwards compatibility in recent years. They stopped shipping a lot of stuff (like JavaFX), JavaEE stuff was removed from the JDK and then re-namespaced, they changed the formatting of the version number, they removed Web Start, they moved some stuff out of Unsafe, they changed how you access class files inside the JDK and so on.

The idea that the only thing that changed was access to internals isn't really the case, but a lot of these changes were "allowed" because Java cares about compatibility with its own spec rather than specific popular apps.


> They stopped shipping a lot of stuff (like JavaFX),

I thought they just split up the JDK into modules, making some libraries optional for a smaller footprint if you decided to ship your program with the JDK included. I am quite sure Swing was also split off, and all my toy programs still run despite that.

> JavaEE stuff was removed from the JDK and then re-namespaced,

JavaEE was never part of the standard JDK. Never used it, but you can probably still find the old java package somewhere.

> they moved some stuff out of Unsafe

out of sun.misc.Unsafe, an internal API of the Sun JDK that wasn't in the java namespace, wasn't documented and had nearly every IDE scream at you.


> I am quite sure swing was also split of and all my toy programs still run despite that.

Swing is still distributed with the JDK, but JavaFX isn't. If you use JavaFX, you need to either add Maven dependencies on the JavaFX modules or compile with a JDK distribution which still includes JavaFX (e.g. jdk-fx from Azul: https://www.azul.com/downloads/?package=jdk-fx).
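
The Maven route looks roughly like this (version illustrative; one artifact per JavaFX module you use, e.g. javafx-fxml, javafx-media):

    <dependency>
      <groupId>org.openjfx</groupId>
      <artifactId>javafx-controls</artifactId>
      <version>17</version>
    </dependency>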


At the byte code level, maybe - but any language as large as Java is (over the course of 20+ years) going to have changes to its libraries, including deprecations / removals.


Tried running a Java applet recently? ;P


Oracle itself still sells and supports software that runs client-side as Java applets in IE (the "server" that comes with it is an old HP desktop).

If you ever peek at the software some industries are running, you may find yourself extremely surprised. A lot of it is incredibly specific, and truly terrible.


The answer to this question is almost certainly "no".

It is ironic that what was originally the most hyped feature of Java has now been removed. But nobody is going to miss it.


I worked on one this year. You need Java 8 for it and an insanely old browser.

They didn't mention the applet part in the job description. That was my shortest tenure ever, including Summer jobs and internships...


Some applets could be launched with javaws, but it would require some tinkering to extract a proper jnlp file from the HTML or JavaScript. I managed to do so with the HP iLO remote KVM app.


javaws was also removed =)


Need to use old Java, yep. Not as painful as using old browser.


Ugh. Sounds horrible.


>> Tried running a Java applet recently? ;P

> The answer to this question is almost certainly "no".

For me, it's a yes. My employer uses some ancient monitoring application called SiteScope. They recently "upgraded" from an ancient version to a newer one (I think the latest), and nearly the whole frontend appears to be implemented as a Java Applet. Since no browser on the Mac supports Applets anymore, their workaround was to download some Java App that ran the UI.

The older version we had used HTML.


I love the concept that the solution is to work around the applet issue and not find some new monitoring software.

I wonder how effective ancient monitoring software is.


> I love the concept that the solution is to work around the applet issue and not find some new monitoring software.

The workaround is actually part of the software itself (there are instructions right below the login box).

My understanding is they're only upgrading because the old version didn't support modern versions of TLS, and there's a big push to get off of those. Finding new monitoring software would probably be more work. At a minimum, the new version probably supports all the features we're using, and at best all our configuration can be automatically migrated. I don't know the details: I'm not part of the project, I just have some apps with some monitors set up for them.

> I wonder how effective ancient monitoring software is.

It gets the job done.


Well it's the only way I can manage our Brocade switches so.. yes ;(


I know it does happen, but I wouldn't say it's common. Generally speaking, most of the software I use runs without issues on newer JVMs, despite being built for an old one.


Java 8 came out 7 years ago... That's like a century in the software world.


Oracle moved away from their "free" license to requiring a $100 or $200 per seat license for home users (and I think even more for business users), so the clock stopped ticking at Java 8u202 for a lot of people who refused to (or couldn't) move to OpenJDK.

edit: although this appears to have recently changed! https://blogs.oracle.com/java/post/free-java-license

It was really a terrible idea from a security perspective - and unfortunately there are organizations that won't consider software without a support contract.


That never worked. I had to use a government Java web applet that ran on only one exact Java version, down to the patch level. Anything else, it didn't work. And that Java version was already out of date, so every browser was crying about it. fsck java.


All desktop software I knew around 2009 already used a bundled JRE.


Write once, test everywhere was the problem. If you wanted a fully tested app, you had to bundle a JRE, and if you are bundling a JRE then backward compatibility is not as important.


I'm more skeptical. One of the reasons we're still on JRE 8 is that the newer JREs I tested throw crash panics on several tasks, and the ones that don't take 40 or 50 hours to finish instead of 4 or 5 (mostly changes in the garbage collection as far as I could tell; several command-line options we rely on for performance were deprecated with no way to turn them back on).

That left me with the impression that stability and performance had been abandoned in favor of throwing everything including the kitchen sink at it to try and attract people who like Rust and Go.

Could be wrong, but the hand-waving away of the implication at the end that 17 is slower than 15, and not even trying to compare with 8, which is rock solid and crazy performant, leaves me thinking nothing actually changed.


Here is a blog that compares garbage collection across several versions and GCs: https://jet-start.sh/blog/2020/06/09/jdk-gc-benchmarks-part1

They concluded:

"JDK 8 is an antiquated runtime. The default Parallel collector enters huge Full GC pauses and the G1, although having less frequent Full GCs, is stuck in an old version that uses just one thread to perform it, resulting in even longer pauses. Even on a moderate heap of 12 GB, the pauses were exceeding 20 seconds for Parallel and a full minute for G1. The ConcurrentMarkSweep collector is strictly worse than G1 in all scenarios, and its failure mode are multi-minute Full GC pauses."


I forget, but iirc it uses one by default; you can specify the number of threads to use with -XX:ParallelGCThreads=<N>, which is what got deprecated later.

Its target is throughput rather than response time, so yep, "pauses", but the entire job takes 4 hours instead of 40. In production those pauses vanish with load balancing.
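
(For reference, the kind of throughput-oriented setup I mean; the flags are real HotSpot options but the exact values and class name are illustrative:)

    java -XX:+UseParallelGC -XX:ParallelGCThreads=8 -Xmx12g BatchJob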

checking for jdk8 vs jdk17 turns up

https://www.google.com/amp/s/www.infoworld.com/article/36068...

"New Relic found that nearly 100 percent of users are running either JDK 11 or JDK 8, the two most recent LTS releases."

so 17 is 8% faster than 11, apart from when it's 10% slower, and "maybe" slower or faster than 15, but they chose not to do a like-for-like comparison... while 11 is catastrophically lower throughput than 8...

I may get some time soon to do some benching. I don't want to be back on 8, but the benching I did for at least 10 and 11 just made them a no-go; completely crashing the JVM was an even bigger issue than the time.


Are you sure ParallelGCThreads was deprecated? The cms GC algorithm was deprecated but that doesn’t use that setting, which is used by the ParallelGC algorithm. The default was changed to the g1gc algorithm. But if you were manually specifying parallelgc that shouldn’t have been impacted.


> Are you sure ParallelGCThreads was deprecated?

Definitely not sure it was that option exactly; reasonably sure it was something related to it (maybe it no longer working with a GC it used to).

TL;DR: I found out "the hard way" that -XX means "JVM specific, can change meaning at any time, and is JVM vendor specific".

Oracle Java 8 had a sweet spot that made perf awesome; nothing since has come close.


G1 uses a lot more memory than CMS, so we can't upgrade. CMS is deprecated in 11, but we still use it.


The magnitude of the change should tell you that you're hitting a weird edge case. Obviously new versions of the jre are not going to be 10x slower in general.


Must be something unique in your workload. For my team, later versions of Java have been stable and a bit faster than older versions.


JDK 8 is neither rock solid nor performant.

You should look at your codebase; you might be doing some strange things whose behavior changed in newer JDKs (e.g. String.split no longer shares the source String's backing array, but I don't remember when they changed that). Some options were deprecated because they are now always on (probably not all of them).


3 years uninterrupted uptime and counting.

What in 17 do you think is worth risking downtime for?


> 3 years uninterrupted uptime and counting.

Didn't you have to install updates and patches during the past three years? Or were those sacrificed to keep an uptime streak that means nothing?


Sure, but no other updates or patches require massive rewrites to keep things working as well as they do now, and they can be applied one machine at a time.

E.g. a not-insignificant portion relies on CORBA/RMI. It works, and works fine.

switching, e.g. https://stackoverflow.com/questions/51710274/is-there-a-repl...

might break in unexpected ways, bringing down everything that needs it.

I see 17 is potentially ripping out security manager, why? wth?

So suddenly jars that expect to throw an exception if third-party code tries to do something it isn't allowed to will now just run fine?

Really really very sad face.


> I see 17 is potentially ripping out security manager, why? wth?

Because Security Manager — which enforces runtime access capabilities, like an OS kernel does — exists for the sake of Java applets, and Java applets have been dead for a long time. The Security Manager enables secure execution of "mobile" (i.e. untrusted, not-auditable-at-time-of-release) code — cases where the "host" and "guest" codebases are compiled separately with the "host" having no ability to introspect the "guest" during the "host's" compilation, because the "guest" doesn't exist yet.

Security Manager does not exist for the sake of fighting against your own project's dependencies. There are much simpler and more runtime-efficient solutions (like linters / static-analysis tools) for the case where the "host" can see the "guest's" code at compile time.

Security Manager doesn't exist for the sake of "plugins" (e.g. WARs, Java Agents, etc), either. Even if you don't have a plugin's source code available in your project worktree at build time, you can still validate the plugin's behavior at release time by statically analyzing its bytecode before building it into your release fatJar. Which still allows you to make much stronger guarantees than Security Manager can.
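
For anyone who hasn't touched it, a minimal sketch of what the Security Manager gated (the class name is mine; note the whole mechanism is deprecated for removal as of 17, per JEP 411):

    // With a security manager installed and no extra permissions
    // granted, sensitive operations throw instead of succeeding.
    public class SandboxDemo {
        public static void main(String[] args) throws Exception {
            System.setSecurityManager(new SecurityManager());
            try {
                new java.net.Socket("example.com", 80); // checkConnect
            } catch (SecurityException e) {
                System.out.println("blocked: " + e.getMessage());
            }
        }
    }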


No, not just applets. It's used for any kind of secure system where developers and their components are forbidden from, say, network access, e.g. some Nashorn map-reduce JavaScript. Java _used_ to be the language of choice for building complex systems with large multinational, multidisciplinary teams. The security manager was a vital part of those projects; removing it means none of that can migrate.

why do it?


Because the modern Enterprise-y best-practice is to not allow multiple security contexts to exist within a single monolithic OS process where the OS kernel's own "security manager" can't get to them to enforce its own policy; but rather to cleave apart your process along security-context lines into multiple processes; containerize those to isolate them from each-other; poke precise holes in those containers for well-defined RPC channels to flow; and connect those channels using a service mesh with secure application-layer firewalling.

No modern ops staff would trust the JVM to not be exploitable if you're exposing it as an attack surface to arbitrary user-supplied (or "third party dev who is incentivized to make a network call you don't want them to make") bytecode. Creating that kind of boundary is what technologies like gVisor and Istio are for; it's not the job of a programming-language runtime. (See also: the decline and fall of Google App Engine's "secure runtime" standard environment, in favor of the gVisor-based flexible environment.)

(Well, okay, Node and Lua are both runtimes that have trustworthy VM-level isolation that can be used to achieve this — see Cloudflare Workers for the Node case, or Redis scripting for the Lua case. But this is because Node and Lua both have "shared-nothing" runtime execution semantics with no concept of static-bound identifiers [i.e. constants] — meaning that all the fundamental "stuff" available in a Node or Lua execution context is put there dynamically, and so a sandbox can be constructed "from scratch" with arbitrary stuff [mocks/stubs/proxies] inserted in place of the fundamental module handles, to create a whitelisted environment. The JVM is not built like this; the JVM's Security Manager merely blacklists certain known-dangerous operations, and so does nothing to prevent exploitation of zero-day vulnerabilities.)


> Node and Lua are both runtimes that have trustworthy VM-level isolation that can be used to achieve this — see Cloudflare Workers for the Node case

Correction: Cloudflare Workers does not use Node, explicitly because Node does not provide secure isolation. Workers uses a custom runtime built directly on V8, which does provide secure isolation in the way you describe.

As for Lua, it might work in theory, but I don't think it has had anywhere near the scrutiny V8 has had. I wouldn't bet on it when security really matters.


Thanks for that detail.

Re: Lua, I don't know about "scrutiny" in the code-audit sense, but I believe that there are a few battle-tested deployments of Lua in entirely-untrusted sandboxing use-cases. Some games (e.g. Minecraft's ComputerCraft mod) allow players to deploy arbitrary Lua scripts which are then executed on other arbitrary players' computers; and nobody's managed to nuke anyone else's hard drive through these scripting mechanisms yet, despite griefers having high motivation to try.

For the same reason that blockchain software continuing to exist makes me have a lot more faith in the collision-resistance of SHA256 (despite a lack of cryptanalytic proof of such), I'd trust a tightly-constrained Lua sandbox. (In both cases, of course, there might be a state-military-level adversary with a vulnerability in hand, who doesn't want to show their hand for something as trifling as money. But such an attacker is highly unlikely to deploy their exploit against my service in particular, even if they have it.)


> Because the modern Enterprise-y best-practice is to not allow multiple security contexts to exist within a single monolithic OS process where the OS kernel's own "security manager" can't get to them to enforce its own policy; but rather to cleave apart your process along security-context lines into multiple processes; containerize those to isolate them from each-other; poke precise holes in those containers for well-defined RPC channels to flow; and connect those channels using a service mesh with secure application-layer firewalling.

It always was? None of that solves the problem of blacklisting certain actions or logging for audit based on the content and/or origin of the data being processed.

Ignoring such things does explain why and how modern enterprise-y solutions are leaking like sieves, though, I guess; e.g. SolarWinds would likely never have happened if they'd implemented proper application-component-level permissions.


None of that explains why Java breaking tried-and-tested existing solutions is a good idea for anyone.


Because nobody is maintaining it, because nobody is using it in big modern projects; and a blacklist-based solution like the Security Manager needs constant attention and maintenance as the JVM's stdlib grows, to avoid something slipping through. Every time someone adds something that needs to be secured to the JVM, there's the extra overhead of coming up with a (not necessarily easy!) way of instrumenting it for the Security Manager to control.

Given a fixed pool of developer resources, eliminating unused per-change overheads like this, serves as a multiplier for the project's ability to do everything else. The time freed from core devs by not writing Security Manager instrumentation for everything, could be instead allocated to e.g. getting Project Loom stable and upstreamed.

And that's assuming that everyone even does the Security Manager instrumentation of each new thing perfectly. The devs doing that instrumentation don't use Security Manager any more than a random Java dev does, so they likely don't know its ins and outs. Every new patch is thus a new opportunity for someone to bungle the Security Manager instrumentation, and thus for exploitable JVM surface area to be created. The more time that goes by, the more likely such exploitable surface area is to have already been introduced and gone entirely unnoticed by a population of developers that just doesn't use the feature. And so, the more time/patches that go by, the more that the Security Manager shifts from being a harmless thing to leave around, to a land-mine waiting to strike any dev who does try to use it.

(The "blacklist where at any point a new feature could be added and its security implications missed" problem is the same reason it's hard to trust SELinux—but at least SELinux is used by default for securing system components in some Linux distros, so at least SELinux policies gets pulled in on a regular basis as part of other companies' security audits of their stacks. Security Manager isn't used by default anywhere!)


Nashorn is no longer in the JDK.


> I see 17 is potentially ripping out security manager, why? wth?

Because it is not used by any modern (post-Java 6, probably) feature in Java?

So why keep such a thing? If libraries want such features, they can create their own.


People still take outages for patching? I think the parent means service uptime.


Taking a node down does not mean an outage at all. Service uptime also reads like a non-sequitur, given that the comment on the JDK 11 crashes sounded like OP was doing something weird with the new JDKs, not with the old prod system running JDK 8.

JDK11 was released way back in 2018. Any claim that it's unstable these days is not credible.


Why would there be downtime? It's faster. Less RAM hungry. Tons of new great features.

I mean, if you don't want or feel the need to upgrade then don't. There are plenty of reasons to do so IMO.


Because the core system dates back to JRE... 1.4, has had ~2 decades of painful testing, and big chunks of code don't currently work on anything newer than 8.

A bug that takes 100 hours to manifest takes multiples of that to fix.

Compute and RAM are crazy cheap these days; developers are not, and finding good ones is harder than ever.


Yeah I'm gonna say you're an edge case and that the majority of users will see performance benefits. Maybe not you, but that's legacy code for ya and an awful lot of technical debt that's likely not worth it.


Every Java dev was an edge case, from small houses building blockbusters like triage through to the likes of Gazprom and Google building their entire backend on it with tens of thousands of devs.

since you threw them out, which "majority of users" are you thinking of?


Nah, I'm just saying you have a lot of legacy code you're either incapable of fixing or not willing to fix. And no, not every java developer is an edge case. If that were true then no one would be able to upgrade and they wouldn't bother creating more Java releases, we'd still all be on 8. See I too can make sweeping assumptions with no basis in reality. Sorry your codebase sucks, good luck!


I know you have a lot of freshly written code that will crash multiple times a week for the first few years it's used in the wild.

If you are wondering why no one wants to pay for it, that is why.


LOL, you know nothing about what I do. And it sounds like you have experience with "freshly written code that crashes multiple times a week for the first few years"... here's a hint, it's probably you and your development team to blame, not the JDK.

Maybe you should quit being lazy, learn the new JDK APIs and features, and convert that POS legacy codebase into something that's not dependent upon a platform released 7 years ago. But then again, maybe you're not a competent enough developer to do so, so you blame the JDK developers instead.


Here's a real hint: I'm on a yacht in the Med, and will fly my private helicopter home soon. I also don't want to know anything about you. Well done.


Sure you are buddy, keep dreaming. One day you'll learn enough Java to upgrade your POS legacy codebase so you're not on-call 24x7.


I really didn't need to know you live on the wrong side of the 24x7 on-call employment curve.

But now that I do, I still have no interest in anything you think you know; you sound like the kind of dev I could replace with a 14-year-old on $2 an hour.


Hah! Good luck dude, I feel sorry for you and your POS app that you can't upgrade. Stuck on JDK8 for the rest of your life while the rest of us have moved on. Must suck doing legacy support for such an ancient app. And by the sounds of it, you must have been hiring those 14 year olds @ $2/hour. Kinda explains why you're in the mess you're in, hahahahaha!!!


Meanwhile, since you seem so interested, in the real world, even applications that run on windows 7 still get upgraded.

Luckily I didn't, say, stop doing any Java after JDK 8 was released and waste my time on worthless, soul-destroying bash scripts.

If I did that, rather than linking you a 30 second youtube vid of my check ride last month

https://youtu.be/K4F7Ll9PGfI

I'd be the worthless POS instead of you.

I don't feel sorry for you; you mostly make your own luck, and a quick check of your comment history suggests we both have the life we deserve.


I'm doing good over here, thanks for checking my comment history! Good luck to you dude, sounds like you need it.


What will you do with this system when Java 8 stops being supported (no more security fixes)?


That's actually less of an issue than it sounds for a JVM build that hasn't had a relevant and important security issue found in over a decade and is now open source; all the issues found and fixed recently give it better assurance than any recent code that hasn't had the same scrutiny.


Nothing for you, but something for us.


The current JDKs are at least as stable and, for most workloads, significantly more performant and less RAM-hungry than 8. JDK 8 is medieval technology compared to the modern JDK. The savings in hardware alone pay for the cost of migration within less than a year.


Java 8 is medieval. Java 17 is at least Enlightenment-era.

At a prior employer, we ran a heavy-duty compute engine and a bunch of other services on Java 13 (don’t ask) without any issues. Stupid JDK tricks in dependencies were the hard part of migrating 8 to 13.


Does this include the cost of migration? In my experience it is not as easy to move companies from LTS JDK to newer ones. There are external dependencies sometimes too.


Yes, I did see the "less RAM-hungry".

I thought that may actually be part of the issue: not keeping things that benefited from that persistence around long enough, trading throughput away for smaller lookup tables.

As for "medieval technology", in what way? If the JVM can't run a day without creating an hs_err crash report and you need 10 times as many CPUs to complete a job in the same time, how is that a step forward?


> Traded throughput away for smaller lookup tables.

Much of the improvement came from a re-encoding of String (no longer internally encoded as UTF-16), and more sophisticated GC algorithms.
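
(That re-encoding is JEP 254, "Compact Strings", from JDK 9; if you suspect it in a regression, it can be switched off for comparison. The class name here is illustrative:)

    # Compact strings are on by default since JDK 9; disable them to
    # compare against the old always-UTF-16 String layout:
    java -XX:-CompactStrings MyApp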

> If the jvm cant run a day without creating a hcrash report and you need 10 times as many cpus to complete a job in the same time, how is that a step forward?

If you see crashes, submit a bug report, but we've not been seeing any increase in crash reports. It's possible that you're using a library that hacks into JDK internals and is a cause of such problems. Try using JDK 16 or 17, where strong encapsulation is turned on (at least by default).


Those crashes were just in our unit tests: creating, linking and destroying large volumes of objects, throwing them at a thread for some basic compute, then writing results to disk. I agree it's really about the GC, but the new ones just never seemed to beat the setup we have on 8, and there is only so much time that can be devoted to making things that work, work on the "new shiny".

Java after 8 is just mayhem; before, it was easy: Oracle for some things, IBM for others.

Now there are new long-term releases faster than we can test them, they fail at least one of the unit tests that all pass on 8, and OpenJDK is missing half the really nice stuff that was once free.

> submit a bug report

But to whom, and under what title? Oracle, IBM, OpenJDK?

E.g. "none of the CORBA stuff works anymore because it's not part of 'new Java'" is whose fault?


If you see crashes -- report them; they're either bugs in the JDK or possibly the result of a library you're using that employs native code or Unsafe in a way that depends on a JDK 8 implementation detail. Overall, there has not been any increase in reported crashes since 8.

> but to who and under what title? Oracle, IBM, openjdk?

The most popular Java implementation by far -- the one that serves as the basis for almost all distributions except IBM's -- is called OpenJDK, and is led by Oracle, with contributions from Red Hat, Intel, Google, Amazon and others. You report the bug to whomever you get your JDK from, and they report it to the OpenJDK project.

> e.g. "none of the corba stuff works anymore because its not part of 'new java'" is whose fault?

CORBA works fine, it's just not distributed as part of the JDK anymore. Like most decisions around OpenJDK and the Java spec, it was made by Oracle-employed engineers (like myself), and signed off by representatives of other companies. The reasoning is that it would do more good for more people if the core team behind OpenJDK would spend their efforts on things other than maintaining CORBA, and the reaction from the community has been positive.


> they're either bugs in the JDK

Here is the documentation I found describing what I would see in anything post-JDK 8 that would compile:

https://docs.oracle.com/en/java/javase/11/troubleshoot/fatal...

How do you get from that to a (useful) bug report of what broke after 25 hours of uptime in the newer JVM? I tried most of the JDKs I found (OpenJDK, Oracle, IBM) before running out of time.

>CORBA works fine

Use almost everything removed in https://docs.oracle.com/en/java/javase/11/migrate/index.html

Which reminds me that the migration cost estimate came in at 7 figures, for no clear benefits. The decision after that was to keep running on what we have working and watch how Ruby/Rust/Go etc. develop.

We had actually _just_ started a JavaFX project only a few weeks before finding out it was removed. That project was binned and we went over to Android apps instead.

In fact, looking at the schedule, almost all new projects are Android-based now - all the old code (that still compiles) works fine there, and Google has amazing instrumentation for when consumer-side stuff breaks.


> How to get from that to a (useful) bug report

Start by submitting the hs_err file.
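
(If collecting them is the problem, the log location is controllable; illustrative flags, where %p expands to the pid:)

    # Fatal-error logs default to hs_err_pid<pid>.log in the working
    # directory; pin them somewhere durable to make reporting easier:
    java -XX:ErrorFile=/var/log/java/hs_err_%p.log ...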

> almost everything removed

It was removed from the JDK and made available as a separate module (contributed to Eclipse): https://github.com/eclipse-ee4j/orb

> Which reminds me that the migration cost estimate came in at 7 figures for no clear benefits.

Most companies report a ~10% reduction in hardware costs, and more than make up for any migration costs (which are quite low these days) in under a year. See this experience from LinkedIn: https://channel9.msdn.com/Events/Java/JDConf-2020/Java-11-Jo...

In other words, the benefits are usually significant savings, even if you don't use any new features.


You're assuming a heavily replicated horizontally scaled service. A lot of Java servers run on one or two servers. Hardware costs are immaterial compared to people/opportunity costs.

Fact is, Java now has a reputation for pointless churn that it once didn't have. For example, kicking JavaFX out of the default JDK builds makes no sense outside the world of Oracle corp politics. It would be easy to keep bundling it, and doing so would reduce the churn, but instead it now needs new Gradle/Maven plugins and stuff. Ditto for all the other stuff that's being kicked out.

Also, people won't use incubator modules, so realistically new Javas have very few new features that can justify the painful upgrade process. Some syntax sugar and better GCs. If you're CPU constrained, better perf, but a lot of apps aren't. They're feature constrained.


> Hardware costs are immaterial compared to people/opportunity costs.

First, the fact is that companies do care a lot about those costs, and why not have both? The early problems with the 8->9+ migrations were inevitable, as libraries became coupled to internals. There were two options: stop improving the platform, or get some short-term pain in exchange for a couple more decades at the top.

> Fact is, Java now has a reputation for pointless churn that it once didn't have.

It does have more churn than it once did, because that's what more people want. It's still significantly less churn than any other popular language/platform out there, with the exception of C.

> For example kicking JavaFX out of the default JDK builds makes no sense outside the world of Oracle corp politics. It would be easy to keep bundling it

I understand why it might appear to be easy to start bundling JavaFX with OpenJDK, but it's not. There is no one in the world more personally invested in the success of the platform than the people maintaining it, and we need to constantly choose where to put our resources. With an ecosystem so big, no matter what we do there are bound to be quite a few people who are disappointed with the budgeting, but the amazing resurgence we're seeing speaks for itself.

> Also people won't use incubator modules so realistically new Java's have very few new features that can justify the painful upgrade process.

You're entitled to that opinion, but in a few short months the majority of the ecosystem is expected to be on 11+.


JavaFX is still maintained by Oracle, so I don't see how budgets are related; the JDK ships lots of modules that aren't part of the Java SE spec, and adding it in is a simple matter of jlinking it. It would be very easy to start bundling it again.

I'm sure there's some explanation for separating it that makes sense if you're very close to things, but whatever the underlying rationale: that one move broke every JavaFX app simultaneously. JavaFX is the most modern GUI library, the one Sun/Oracle promoted for years as the replacement for Swing, and yet new Java releases cannot run any old releases of any of those apps. That's what people mean when they say Java lacks backwards compatibility: their apps stop running.


> JavaFX is still maintained by Oracle

Yes and no. Yes, we still have some Oracle engineers contributing, but development is co-led with Gluon, and not at the same standard of contribution as other Oracle-led OpenJDK efforts. Second, the problem isn't the actual building of the package, but the lack of desire to coordinate releases. Same goes for the EE modules -- which orders of magnitude more people use than JavaFX.

> that one move broke every JavaFX app simultaneously

> Java lacks backwards compatibility: their apps stop running.

Yes, but given that Java's backward compatibility is significantly stronger than pretty much anyone else's (except maybe C), it's important to understand that its former, even stronger form was not so much by design as by unfortunate necessity -- investment in the platform was low, and so was the rate of change. There is simply no way to let the complexity of the JDK grow monotonically, so things have to be removed or separated. If, say, 1 in every 100 projects has to make a code change (not just include external packages) only once every few years -- we're very happy with that.

We put a very high premium on backward compatibility, certainly compared to other platforms, but it's not absolute, as that would entail stopping the platform's evolution. Especially now with the disappearance of the JRE, the addition of jlink and the move toward embedded runtimes, aiming for 100% compatibility for 100% of users is not even necessary.


>You're entitled to that opinion, but in a few short months the majority of the ecosystem is expected to be on 11+.

Ooopf.

The only way I see that being possible is if 11+ can be adopted by the ecosystem without significant rewrites of code and build scripts.

Has that situation really improved enough since the nightmare 8-to-11 migration requirements locked most of the ecosystem onto 8 for production builds, and where can we read about it?


> Only way I see that being possible is if 11+ can be adopted by the ecosystem without significant rewrites of code and build scripts.

It can and it has. The only code that had to be modified is code that hacks into JDK internals, as the spec is virtually backward-compatible (the incompatible spec changes are so small as to be insignificant, except possibly the removal of some modules from the JDK that have to be changed into external dependencies, but that's a pretty simple change).

> Has that situation really improved enough since the nightmare 8 to 11 migration requirements locked in most of the ecosystem on 8 for production builds and where can we read about it?

Absolutely, because libraries that hack into JDK internals have already fixed their issues. The projection is still that by the end of this year more than half of the ecosystem will be on 11+.


>Start by submitting the hs_err file.

Having just read the release notes: https://jdk.java.net/17/release-notes

links

https://bugs.openjdk.java.net/browse/JDK-8263710

The code block in that report is similar enough to what I just described to show there is now/still a known issue in Java 17 that kills services after a prolonged period of time.

There's no shortage of those hs_err files, deleted a few thousand a while ago when I found out they were the reason a staging server had run out of disk space.


> The code block in that report is similar enough to what I just described to show there is now/still a known issue in Java 17 that kills services after a prolonged period of time.

That's a crash going back to 8, which 17 is actually reported to reduce (i.e. 17 crashes less than 8 under those circumstances). But, as you can see, when such crashes are reported, they are investigated in depth.


It's one example of issues that never should have made it into a release of any software that cared about stability in the first place.

And even attempting to fix it got deferred to JDK 18.

From where I, and I'm sure most of "the ecosystem", sit, "the JVM eventually explodes if you indefinitely add things to and clear an ArrayList" is more than reason enough not to update to any JVM with that "feature"; 10% hardware savings wouldn't cover a fraction of the support costs there, even for those spending significant amounts on hardware.
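For reference, the pattern in question is nothing more exotic than a long-running loop along these lines (a minimal sketch -- per the linked report, the crash only manifests on specific architectures after prolonged uptime, so treat this as illustrative, not a reliable reproducer):

    import java.util.ArrayList;
    import java.util.List;

    // Fragment (wrap in a main method to run): add to and clear an
    // ArrayList indefinitely, keeping the JVM churning for hours or days.
    List<Object> list = new ArrayList<>();
    while (true) {
        for (int i = 0; i < 10_000; i++) {
            list.add(new Object());
        }
        list.clear();
    }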

OTOH, it is quite nice that new adopters will by design never be able to achieve the same stability those of us who have been around the block a few times have.


> It's one example of issues that never should have made it into a release of any software that cared about stability in the first place.

You realise it's been there for many years before being discovered by one of the millions of Java developers out there, yeah? It goes at least as far back as JDK 8.

> From where I, and I'm sure most of "the ecosystem", sit, "the JVM eventually explodes if you indefinitely add things to and clear an ArrayList" is more than reason enough not to update to any JVM with that "feature",

Once again, that bug -- for that particular class of processors -- is in 8 as well, and the crash occurs less often in 17 than it does in 8.

> OTOH, it is quite nice that new adopters will by design never be able to achieve the same stability those of us who have been around the block a few times have.

Those of us who have been around the block a few times have learned to trust data; if there's any trend in the JDK's stability, it's upward. Our process is so much more careful and the testing more exhaustive than it's been in the 8 time-frame.


>You realise it's been there for many years before being discovered by one of the millions of Java developers out there, yeah? It goes at least as far back as JDK 8.

Obviously, since it was me who just spent the last few days explaining that stability and migration options went downhill so fast after 8 that we got stuck there for production systems.

Do you?

>Those of us who have been around the block a few times have learned to trust data; if there's any trend in the JDK's stability, it's upward. Our process is so much more careful and the testing more exhaustive than it's been in the 8 time-frame.

But adding and removing objects to/from an arraylist was somehow missed from that exhaustive testing? Or really just not considered an important feature for java to worry about?

Better to spend the dev time deleting the security manager, because no one in the ecosystem wants or needs to manage application security from Java code?

All that really just sounds like intentional sabotage to me.


> went downhill so fast after 8

This isn't after 8. The bug is there in 8. And your statement that things went "downhill" after 8 is just factually incorrect, which you can confirm based on publicly available data.

> But adding and removing objects to/from an arraylist was somehow missed from that exhaustive testing?

Given that the bug has been in the JDK for at least eight years, and it wasn't detected in production in all those years until now, why is it surprising that it wasn't detected in testing? It occurs only on specific architectures and specific conditions.

> Better to spend the dev time deleting the security manager, because no one in the ecosystem wants or needs to manage application security from Java code?

You're implying that SecurityManager is used for security, but it hasn't been a security feature in many years (including in JDK 8). It was a security feature for applets, but virtually all features added in JDK 8 and beyond ignore SM. Java's security work has not focused on SecurityManager: it is not part of the platform's core security, and Java's secure coding guidelines do not recommend using it. We're removing SecurityManager on the advice of security experts, in an effort to improve security. Not only does SM not improve security, it might well harm it.

We focus on stability today much more than we did in the 8 timeframe. We have more tests these days that we run more often (thanks to a better CI system), and we merge new features after much more scrutiny and testing than we did in the 8 timeframe. I don't know if you remember, but 8 was largely unusable until the first update. This no longer happens today. New versions are stable on day 1.

I'm sorry you've had a bad experience, and in an ecosystem so large running on so many kinds of machines, it might well happen to some, and, understandably, it paints your views. But looking at the ecosystem overall, there is simply no downward trend in stability, and while we haven't yet crunched all the data, we can tentatively say there has been an upward trend.


>This isn't after 8.

Reasonably certain it's not before 8 :)

I do remember those times though. When bugs like the JVM exploding while doing basic things like adding and removing items from an ArrayList for a prolonged period would be fixed in the next point release before anyone really realised -- rather than left to people who are too busy fixing their own bugs to notice and replicate, resulting in such bugs hanging around for two LTS releases and a decade later.

>We focus on stability today much more than we did in the 8 timeframe.

Couple of thoughts on this. Are those tests really capturing the "JVM explodes after 100 hours" issues, like in that bug report? Or are they just "no new segfaults in this short-lived JVM run in a VM" -- in a JVM that just completely changed the GC again and rewrote a big chunk of the thread handling code? I don't have faith that they are, simply from burning so much dev time previously to find they weren't.

But also: it is really nice to see the JDK development process stabilise after the terrifying licence changes we got on 8. I'm sailing right now, back in the office soon; however, I don't see any Adoptium builds for 17 -- any idea when I can actually check all this myself?


> Reasonably certain it's not before 8 :)

Well, the bug dates at least as far back as 8.

> I do remember those times though.

Since you're talking to a member of the team that maintained Java back then, I can tell you that our quality process back then looks like a joke compared to what we have now. Of course, you might have been bitten by a problem, but our stability has 100% not gone down (backed by irrefutable data), and most likely has gone up.

> Couple of thoughts on this. Are those tests really capturing the "JVM explodes after 100 hours" issues, like in that bug report?

Once again, whatever we did 8 years ago, we're much more thorough today. We are not bug free, but we don't have more bugs than before, and we probably have fewer.

> But also: it is really nice to see the JDK development process stabilise after the terrifying licence changes we got on 8

By "terrifying license change" you mean the 100% open-sourcing of the JDK, and making it 100% open and free for the first time in Java's history that happened in JDK 11? Some people were confused when we made Java entirely free and unencumbered, and made OracleJDK the name of our support subscription, so we've changed it a bit now again. But as of JDK 11 -- and unlike any Java version that came before it -- the JDK is completely open.

> however, I don't see any Adoptium builds for 17 -- any idea when I can actually check all this myself?

Adoptium is a project run by an IBM team, and I have no knowledge of their process. Unlike Oracle, Red Hat, SAP, Azul, Bellsoft, and Amazon, Adoptium is not involved in OpenJDK and is not a member of the vulnerability group, so their builds usually come after everyone else's because they're less familiar with the OpenJDK project and its process (they're more familiar with OpenJ9). But I'm sure they'll be able to make a build soon enough.


>Adoptium is not involved in OpenJDK

They replaced AdoptOpenJDK https://adoptopenjdk.net/ ...

>By "terrifying license change" you mean the 100% open-sourcing of the JDK, and making it 100% open and free

No, I mean

https://www.policypak.com/resources/pp-blog/oracle-java-lice...

>Once again, whatever we did 8 years ago

I didn't (particularly) place any blame. If it wasn't your testing that was catching them early, it must have been someone else's. Presumably they left the ecosystem.

>the JDK is completely open.

But a shadow of its former self.

>but we don't have more bugs than before

I'm really talking about issues related to very long-running JVMs. The kind of things that would have killed applications like Apache Tomcat dead.

In fact, it looks like they use their own JVM build now; I'll give that a go at the same time... however, it looks like they are still really on 8 as well.


> They replaced AdoptOpenJDK https://adoptopenjdk.net/ ...

Yep, it's the same IBM team, who aren't involved in OpenJDK (and weren't when they were called AdoptOpenJDK). They mostly know OpenJ9.

> https://www.policypak.com/resources/pp-blog/oracle-java-lice...

Almost everything here is wrong. The prices and licensing terms are very wrong, but, most importantly, what Oracle did was, instead of providing one part-free, part-paid JDK, start providing two identical ones with different licenses: one completely free, and one for support subscribers.

> But a shadow of its former self.

Actually, it has many more features today than it did back in 8.


Not sure if you'll catch this message any time soon, but:

1. The major source of incompatibility was the removal of the ability to cast a ClassLoader to a URLClassLoader; fixes for that were relatively extensive but also simple enough to get running in a few days.

2. Not seen any crash reports in the first 24 hours which is a good sign, any that do come up I will forward as bug reports.

3. Perf-wise, about equal; memory usage is much better.

4. Thanks for the impetus to give migrating another go.


Good! You might want to try using the G1 GC, which is now the default.

> The major source of incompatibility was the removal of the ability to cast a ClassLoader to a URLClassLoader

Right. That was an unspecified behaviour, though, that just happened to be true for a long time so people depended on it.


So far so good, just default parameters plus an 8 GB heap.

>unspecified behaviour

Also the accepted answer on 100+ different websites for how to choose single jars from a folder at runtime, which still compiles but now throws an exception, with no easy replacement or even explanation of how to replace it. If I get time I might write up how I fixed it.


> Also the accepted answer on 100+ different websites for how to choose single jars from a folder at runtime

Which goes to show that not all information sources are equally good. For a couple of decades, we've said over and over -- and still do -- that depending on unspecified behaviour is technical debt that will eventually break your codebase.

> with no easy replacement or even explanation of how to replace it

That's how online givers of bad advice roll -- they advise you do something they know is wrong, and when things go south they don't help you; those who followed Sun/Oracle's advice ran into no such issues. There are documented and supported ways to get the classpath.

(I won't lie, I used that easy trick, too, once or twice, but I did it knowing full well that it might well stop working some day, and treated it as technical debt.)
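For anyone still untangling this, the widely-copied trick and a supported replacement look roughly like this (a sketch with exception handling omitted; jarFile and com.example.Plugin are placeholders):

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;

    // The old trick: compiled and ran on 8, but throws ClassCastException on 9+
    // because the system class loader is no longer a URLClassLoader there.
    URLClassLoader scl = (URLClassLoader) ClassLoader.getSystemClassLoader();
    Method addURL = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
    addURL.setAccessible(true); // also blocked by strong encapsulation in 16+
    addURL.invoke(scl, jarFile.toURI().toURL());

    // A supported replacement: load the jars in a child class loader instead
    // of mutating the system one.
    URL[] jars = { jarFile.toURI().toURL() };
    try (URLClassLoader child = new URLClassLoader(jars, ClassLoader.getSystemClassLoader())) {
        Class<?> plugin = Class.forName("com.example.Plugin", true, child);
    }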


> We had actually _just_ started a javaFX project only a few weeks before finding out it was removed. That project was binned and we went over to android apps instead.

You were going to start a _new_ project on an old runtime? There’s no logic here.


JavaFX was released and pulled in between LTS versions: introduced in 9 and removed in 11.

https://www.infoworld.com/article/3261066/javafx-will-be-rem...

11 is still the most recent LTS version; 17 doesn't even have an OpenJDK release yet.

Never migrated to 11 because nothing we have running perfectly on 8 ever passed its unit tests on 11.

Even the 17 release notes basically seem to imply 17 is not ready for production.


But you’re making a _new_ project. Just make it on 11 and pull the openjfx jar into the build file.


It was a project started just before 11 was announced along with javafx's removal.

Being removed meant any install on clients' machines would be problematic (possibly why it was removed).

So the project was reassessed, and everything that was going to use javafx was replaced with webkit/html5/css/js which was expected to be and turned out to be much easier to deploy and far far more portable.


this is super interesting. Honestly, I do think that your testcase is worth replicating.

I don't think anyone is advocating you trade off stability for shiny features, but I think everyone is in a state of disbelief that Java 17 would break on your use case.

But if what you're saying is true... it's super duper worth it for everyone to submit that test suite. Java 17 is also LTS... so that effort would be worth it in the long term.


I really think they are "trading off stability for shiny features"

Oracle wanted money from Google for using Java to create Google search and for creating Android.

Didn't get any, and now they are trying to attract devs who are starting similar ground-breaking ideas on Go and Rust, with a similar disregard for robust failure states and long-term code viability -- which is always what separated Java from its competitors.

Why else deprecate all the features that made it perfect for huge teams building systems that never broke, and add "things Rust, Go and V8 did"?

As for the test suite: I've not touched Java 17. Looking at the "what's new", the main thing I can say for sure is "new" is that all the security manager stuff that wraps code made by 3rd parties is going to be marked deprecated, but with no clear upgrade path.


look - the tradeoff here is large vs large. I detect a certain disdain for Android - however there is probably a larger dev pool right now who want that (versus you).

I dont disrespect your usecase, but any company would have to service its largest market first. And if the largest markets are all Android, Spring Boot, Kafka, Spark, etc... then that's where they will go.

I have been in your situation, but ultimately this is a Python 2 vs Python 3 situation. If you want to stay on Python 2... then you had better be able to maintain it yourself.

It's just a larger pool of users on the other side. And this is the common developer on the street wanting to use Kotlin, Clojure, etc. This is the market, so here we are.


> now there are new long term releases faster than we can test them

Aren't they every 2-3 years?


They’re pretty well spaced out.


used to be.

Java 6 (the previously widely deployed JVM) was 2006. Java 8 (the currently widely deployed JVM) was 2014. Now, 7 years later, we are on Java 17, and the most significant thing to change is to break most everything that worked in 2014 -- and promotional material dare not mention Java 8...

In fact "widely deployed" doesnt cut it for java 6, pretty much every pc, windows linux and mac had a solid java 6 deployment.

Java 17 has a lot to live up to.


Now, yes. Fixing what isn't broken isn't high on the list of things to do.


> Fixing what isn't broken isn't high on the list of things to do.

It's hardly reasonable to try to pin not wanting to maintain software on a stability issue with a newer LTS release of the JDK. It's perfectly fine if you make the call to stick with JDK 8 forever, but that's a personal call.


> Considering tons of folks are still on Java 8, I would have liked to also see comparison between that and Java 17.

We did a comparison of Java 8 and Java 11. And that was a good jump too: https://www.optaplanner.org/blog/2019/01/17/HowMuchFasterIsJ...

Now, you can't just sum up those percentages (different code base), but that sum would be a starting estimate, I think.


I legitimately didn't know there were newer versions than 8. I haven't been following the Java space very closely, so I kinda did a double-take when I saw "Java 17" lol.


Same here. I've been in Microsoft land for the past decade, doing some Android work in Xamarin, which I believe is still on Java 8.


Then you will be shocked to learn that Microsoft now builds and ships its own Build of OpenJDK.

https://microsoft.com/openjdk


We are still on Java 8 in many applications and there is no real reason for it. Very much looking forward to get our hands on some new GC implementations.

We will definitely move to 17 as soon as next year.


Anyone still on Java 8 can just stay there. If they cared about whatever it is they would have upgraded years ago.


Complete anathema to the upgrade-addicted architecture astronauts round here.

Nothing can ever be finished, there must be "upgrades"!


Our Spring Boot showcase application https://github.com/porscheinformatik/angular-spring-heroes starts ~ 10% faster on Java 17 compared to Java 11 (compiled with target 11).


Even more interesting is the memory usage (app started and opened index page in browser):

  - Java 8/11 uses ~ 550MiB
  - Java 17 uses ~ 260MiB
(all run with Docker and a memory limit of 1GiB)


What GC is used, and how much CPU time/quota is given to the container?


Default GC, and no quota - which means my full machine (i7-6700HQ)


Just tried a simple Quarkus app (https://github.com/derkoe/quarkus-htmx-todos) with the same results. Java 17 starts around 10% faster. Memory consumption is around 10% lower (after first request).


I hope Spring moves to compile-time DI like Quarkus and Avaje Inject, instead of scanning the classpath at runtime.


You mean Spring Native [1]? I think it's not ready yet.

[1] https://docs.spring.io/spring-native/docs/current/reference/...


No, I do not. I do not want native image. I want the dependency injection container created at compile time, like Dagger 2 [0] or Avaje Inject [1], or Quarkus without native image.

[0] https://dagger.dev [1] https://avaje.io/inject/
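For the unfamiliar, the Dagger 2 style looks roughly like this (a minimal sketch; it assumes the Dagger annotation processor is on the build path -- DaggerAppComponent below is generated at compile time, so there's no runtime classpath scanning):

    import dagger.Component;
    import dagger.Module;
    import dagger.Provides;

    @Module
    class GreetingModule {
        // Bindings are plain methods; the object graph is wired and
        // validated by the annotation processor at build time.
        @Provides
        static String greeting() { return "hello"; }
    }

    @Component(modules = GreetingModule.class)
    interface AppComponent {
        String greeting();
    }

    // Usage -- the processor generates DaggerAppComponent:
    // String g = DaggerAppComponent.create().greeting();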


You can use the Spring AOT plugin [1] without enabling GraalVM native image support.

[1] https://docs.spring.io/spring-native/docs/current/reference/...


Ohhh I see. Thank you!


Isn't that what Micronaut does, from the creators of Spring? I never used the latter, but I really like working with the former.


Yes, Micronaut and Quarkus do that. Micronaut was created by a company (https://objectcomputing.com/)


Do you happen to have a comparison from Java 8 to 11?


Just tried it - there is no significant difference between Java 8 and 11! They both start around 6.5s (JVM running for 7.4s). Java 17 starts the app in approximately 5.9s (JVM running for 6.7s).


We did java 8 vs java 11 a few years ago: https://www.optaplanner.org/blog/2019/01/17/HowMuchFasterIsJ...

TLDR: using the same GC (*), it's 4% to 16% faster (depending on the GC).

But be warned that if you don't set a GC, Java 8 uses Parallel GC (high throughput, so faster) by default and Java 11 uses G1 GC (low latency, so fewer hiccups) by default.
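So for apples-to-apples comparisons across versions, pin the collector explicitly:

    java -XX:+UseParallelGC -jar app.jar   # throughput-oriented (the Java 8 default)
    java -XX:+UseG1GC       -jar app.jar   # latency-oriented (the Java 9+ default)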


It looks like the majority of these gains are related to memory management and the garbage collector, or am I missing something?

JDK 8 -> JDK 11 gains were huge thanks to String optimisations (e.g. compact strings store Latin-1 text as one byte per char, and the GC can dedup identical strings).
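Both effects are easy to experiment with: compact strings are on by default since 9 (JEP 254) and can be switched off for comparison, while deduplication (JEP 192) is opt-in and G1-only:

    java -XX:-CompactStrings -jar app.jar                        # opt out of compact strings
    java -XX:+UseG1GC -XX:+UseStringDeduplication -jar app.jar   # opt in to string dedup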


I'm not super surprised? That seems to be where most improvement happens across a variety of languages nowadays.

I recently listened to an interview with someone working on Intel's C++ compiler, and he indicated that, contrary to popular assumption, most of the speed boost you can get with it isn't from better instruction-level micro-optimization; it's from analyzing information flows and maximizing locality of reference.

Perhaps this is an inevitable result of that ever growing processor-memory performance gap. Nowadays, the cost of having to take a walk of shame out to the RAM chips is so great compared to everything else that I would assume that minimizing the number of times you have to do that is by far the lowest-hanging fruit.


Java's value objects (inline classes) will "maximize locality of reference". [1]

[1] https://openjdk.java.net/jeps/169


Or at least partially mitigate one of Java's biggest performance weaknesses.


That draft is approaching 10 years old now. Has there been any indication of progress on this front in the recent JDK versions?

The Kotlin workaround hacks are clever but also nasty, and having real VM support for this would be great. But it is also starting to seem like the type of issue that just won't ever be fixed, despite how hugely important it is for performance.


It takes time to make such a fundamental but backwards-compatible change for 10 million developers... Nobody wants a Python 2 -> 3 situation...


It's a new keyword & bytecode feature, so there are no backwards compatibility risks to it. Nobody's existing code would be impacted, as nobody's existing code uses user-defined value types in the bytecode.

It looks like there were some experiments and prototyping out of Valhalla from 2014-2018 (including a publicly available build: https://jaxenter.com/java-value-type-163446.html ), but there don't seem to have been any updates since then



> there don't seem to have been any updates since then

Oracle (Brian Goetz in particular) regularly gives talks and interviews and publishes updates.

There was even a link on this site a couple weeks ago: https://news.ycombinator.com/item?id=28364500


They don't want to introduce a third kind of type (next to primitive types and reference types). They are unifying the type system so that after this change we will also get universal generics, so that List<T> could be List<int>, List<Person>, and also List<Person.val> (where Person.val is an inlined class without identity).
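In the Valhalla drafts this looks roughly like the following (speculative -- the syntax has varied across prototypes and may change again before it ships):

    // A value class gives up identity, so the JVM is free to flatten it.
    value class Point {
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // With universal generics, a List<Point> could then be laid out as a
    // contiguous run of x,y pairs instead of an array of pointers.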


But introducing such a large scale change to the JVM has to be well thought out, especially when it will get to live for decades.


Anything more than a couple percent just due to improved memory management is a phenomenal improvement.


My knee-jerk reply when clicking on the topic was "depends on how much memory it demands."

Good to find that the assumption was correct, lol.


Taking this opportunity to promote the author of this article (Geoffrey de Smet) and the project he's been the lead of for a very long time (OptaPlanner):

If you've never heard of constraint programming -- it's really interesting (personal opinion).

It lets you programmatically state "rules" and "goals" of some system, and solvers will automatically attempt to find the best solution.

OptaPlanner is one of the most advanced such engines that I'm aware of:

  Under the hood, OptaPlanner combines sophisticated Artificial Intelligence optimization algorithms (such as Tabu Search, Simulated Annealing, Late Acceptance and other metaheuristics) with very efficient score calculation and other state-of-the-art constraint solving techniques.
But it's also really approachable since they rewrote it in 2020 to accept regular code for rules, instead of the custom DSL they used before (Drools).

https://www.optaplanner.org/blog/2020/04/07/ConstraintStream...

I got interested in this area some years ago, and reached out to Geoffrey personally. He took the time to reply to me and answer a few questions. Great person, he's brilliant and super passionate about the domain.

If you have any sort of practical usecase for this (scheduling, vehicle routing, cost optimization, etc) and this sounds at all interesting, highly recommend giving it a shot!

As an example of what you can do in a few lines with this, here's one from the docs:

  The following code snippet first groups all processes by the computer they run on, sums up all the power required by the processes on that computer using the ConstraintCollectors.sum(… ) collector, and finally penalizes every computer whose processes consume more power than is available.

    private Constraint requiredCpuPowerTotal(ConstraintFactory constraintFactory) {
        return constraintFactory.from(CloudProcess.class)
                .groupBy(CloudProcess::getComputer, sum(CloudProcess::getRequiredCpuPower))
                .filter((computer, requiredCpuPower) -> requiredCpuPower > computer.getCpuPower())
                .penalize("requiredCpuPowerTotal",
                        HardSoftScore.ONE_HARD,
                        (computer, requiredCpuPower) -> requiredCpuPower - computer.getCpuPower());
    }
---

As a bonus, it's natively integrated into Quarkus and the Quarkus ecosystem. Quarkus is the best thing since sliced bread if you're on the JVM and are writing APIs.

https://docs.optaplanner.org/latest/optaplanner-docs/html_si...


I would also add that Constraint Programming has a small but very dedicated community, and the amount of innovation in the space is incredible given how niche it is. Some things I find amazing:

* Dozens of solvers with hundreds of search methodologies

* Intermediate Language specifications (FlatZinc) to allow solvers written in one language to interact with completely different languages via a common intermediate language.

* Entire catalogs of useful constraints https://sofdem.github.io/gccat/

* Solver Competitions https://www.minizinc.org/challenge.html

* Peer Reviewed journals https://www.springer.com/journal/10601


Geoffrey de Smet, OptaPlanner, and Constraint Programming (CP) are all amazing indeed. If we're talking about underhyped tech that actually works wonders on practical business problems, I would also take advantage of the limelight to include the whole CP field (including other solvers like Gecode), SAT solvers (MiniSat, Lingeling, Glucose, Chaff, ...) and SMT software (MiniZinc, Z3, ...), and Operations Research / mixed-integer programming (MIP), in which unfortunately open source software lags behind the state of the art (Gurobi, FICO Xpress).


I wish SMT solvers, in particular, were easier to approach. I have a number of problems that are solved once per month, and I wouldn't mind throwing a couple of weeks of compute at them to get absolute tip-top results.

Posing SMT problems though … that's an art. There aren't enough resources on "problems at scale" for me to just dip my toe into it…


Strong recommendation to take a look at MiniZinc. It’s a modeling language for combinatorial optimization problems, and is often quite easy to model problems in it. The cool thing is that one can then solve the problem using many different solver technologies.


Is there an intro you can recommend? I’ve never heard of CP, would love to read about it.


There are a couple of very good coursera courses on modeling with MiniZinc, the first one being https://www.coursera.org/learn/basic-modeling


Signed up. Much appreciated.


I recently got super interested in constraint problems after reading Classical Computer Science Problems in Java. They were really fascinating (along with all the problems in the book).


Thank you very much for the recommendation, looks like an amazing resource. Hopefully I'll be able to schedule time to work through this. Would have loved to study CS, but awesome/time-consuming events in my life prevented it, which is totally fine.


The author also wrote the book for Python or Swift if you’re not into Java.


Author here... thanks really appreciate the shoutout! The Java version is the most recent and probably the best version if you know multiple of the languages and have to choose between them. The Python and Java versions cover all of the same problems. The Swift version (the oldest) is missing a couple. You can find out more about all three, including links to translations in 9 languages, here: https://classicproblems.com/


Thank you very much for writing it. Lucky me, I'm most proficient in Java, even though I've used all three. May do it in Kotlin, we'll see. You deserve way more than a shoutout; submitted it <edit>so Pythonistas and Swift developers may discover it too</edit>.


If you're interested in this topic, we're building a tool for writing cluster manager components (like policy-based container schedulers) using constraint programming, where constraints are specified using SQL.

Paper: https://www.usenix.org/system/files/osdi20-suresh.pdf

Code: https://github.com/vmware/declarative-cluster-management/


Holy smokes, this is one of the coolest things I've ever seen.

This lets you use data from standard, JDBC-supported SQL databases and write constraints as plain SQL-like queries?! This reminds me of an idea I had about embedding a solver API in Postgres using its extension functionality and being able to make SQL "VIEWS" that represent solver results, like a delivery route or employee roster.

  CREATE VIEW todays_employee_schedule AS
    SELECT ... 
    CHECK row_satisfies_constraints(...)
If I'm understanding this properly, it seems like this could be applied to general-purpose problems as well, not just the one you're targeting, right?

        // Create an in-memory database and get a JOOQ connection to it
        final DSLContext conn = DSL.using("jdbc:h2:mem:");

        // A table representing some machines
        conn.execute("create table machines(id integer)");

        // A table representing tasks, that need to be assigned to machines by DCM.
        // To do so, create a variable column (prefixed by controllable__).
        conn.execute("create table tasks(task_id integer, controllable__worker_id integer, " +
                "foreign key (controllable__worker_id) references machines(id))");

        // Add four machines
        conn.execute("insert into machines values(1)");
        ...

        // Add two tasks
        conn.execute("insert into tasks values(1, null)");
        ...

        // Time to specify a constraint! Just for fun, let's assign tasks to machines such that the machine IDs sum up to 6.
        final String constraint = "create constraint example_constraint as " +
                "select * from tasks check sum(controllable__worker_id) = 6";

        // Create a DCM model using the database connection and the above constraint
        final Model model = Model.build(conn, List.of(constraint));

        // Solve and return the tasks table. The controllable__worker_id column will either be [1, 5] or [5, 1]
        final List<Integer> column = model.solve("TASKS").map(e -> e.get("CONTROLLABLE__WORKER_ID", Integer.class));


Thank you for the kind words!

Yes, it could be applied to general problems as well. In fact, we used it once to plan a program committee meeting. The library (DCM) has no idea what problem setting it's being used for, and has no semantic info about the problem other than the schema and constraints we tell it about.

That said, the current focus is on things like incremental computation, debuggability, automatically subsetting the relevant state from the DB, and scalability to really large cluster sizes (O(50K) nodes), which are more useful in the cluster management context than general-purpose constraint programming tasks.

Edit: Also worth mentioning is that your intuition is spot on. The earlier versions of the library went with a CREATE VIEW syntax as you wrote it out. Now that we have a customizable parser, we have since changed it to CREATE CONSTRAINT for clarity.


He’s also a fairly good manager of PRs in my experience. He’s sort of blunt, which can be hard to deal with if you’re used to more Anglo pleasantries, but you can tell he’s ultimately being helpful.


Thank you for the kind words gavinray :)

OptaPlanner is a team and community effort. Credit goes as much to each of them (see team page) as it does to me. And we're standing on the shoulders of giants (other open source projects).


Hi, cool to see you here. Currently working on a large-scale (operating costs in the billions) optimization problem, and I'm testing out OptaPlanner along with some commercial solvers. I really like how the problems are modeled using annotated POJOs, as opposed to some of the more contrived array-based modeling languages out there. And the ConstraintStreams concept is really awesome (if a bit hard to grok at first).

Is there a convenient way to reach out to you or your team with questions? Do you accept outside pull requests?


Of course we accept pull requests :) Do start with something small - big changesets are hard to get through the first review. We're on the kie.zulipchat.com channel #optaplanner [1] if you want to talk about such PR ideas.

As for community questions, the best place for public questions is Stack Overflow [1]. For private questions and enterprise-grade support, contact my employer (Red Hat) about our paid support subscription, which pays for 99% of OptaPlanner's development work.

[1] https://www.optaplanner.org/community/getHelp.html


I think that one of the improvements is the ability to release memory from the VM back to the OS when it's not needed. Might help with microservices.
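For example, since JDK 12, G1 can promptly return unused committed memory to the OS (JEP 346); a periodic GC can be triggered while the process is idle (jar name illustrative):

    java -XX:+UseG1GC -XX:G1PeriodicGCInterval=60000 -jar service.jar   # check every 60s when idle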


How does Oracle licensing play into this? Are these improvements likely to show up in less encumbered Java versions?


Although there are lots of contributors to OpenJDK (which is where Java SE is built), Oracle continues to show up and do a large part of the work. This work is then used to build many distributions, including Oracle's JDKs. So the answer is yes, these improvements will show up in most distributions.

But anybody who wants to use a JDK built by Oracle can choose between the Oracle OpenJDK build [1] with the GPLv2 w/ classpath exception license, and now have the option to use the OracleJDK free of charge in production with a one year overlap between LTS's [2].

[1] https://jdk.java.net/ [2] https://blogs.oracle.com/java/post/free-java-license


I'm curious how the Shenandoah GC stacks up here.


OptaPlanner is very much not the type of workload that would benefit from Shenandoah, which is optimized for reduced pause times and low latency. OptaPlanner's most useful solvers generate lots of garbage, so throughput considerations dominate significantly over latency concerns. You'd use G1, ZGC, or Shenandoah for web APIs or trading platforms, but not for a computation-heavy solver that is expected to not produce useful results for minutes or hours.


Good clarification as I don't believe many JVM users realize that the various GCs have different use cases.


Agreed - but I'll run the benchmarks anyway for a new blog post. Measuring is knowing.

OptaPlanner is best off with Parallel GC (high throughput), but most Java processes (REST services etc) are much better off with G1GC (low latency) and its siblings (Shenandoah, ZGC).


Is it worth upgrading for any performance improvements for a typical Android/Flutter workflow? Any improvements in compile times, running the emulator, running Android Studio, etc. ?

How about gradle? I really dislike gradle.


On a tangent, OptaPlanner looks really interesting. Does anybody know by any chance how it compares to OR-Tools [1]?

1: https://developers.google.com/optimization


I would like to see some workloads, not just GC times.


FTA: “Each run solves 11 planning problems with OptaPlanner, such as employee rostering, school timetabling and cloud optimization.”

How is that “just GC times”?



Yeah, I would like to see them too, especially when I see something like this:

> When we benchmarked JDK 15, we saw that Java 15 was 11.24% faster than Java 11. Now, the gain of Java 17 over Java 11 is less. Does that mean that Java 17 is slower than Java 15?

> Well, no. Java 17 is faster than Java 15 too. Those previous benchmarks were run on a different codebase (OptaPlanner 7.44 instead of 8.10). Don’t compare apples and oranges.

The author is defending their other article by discrediting their own methods - what do those percent values even mean if they differ so much between different versions of the same benchmark? Shouldn't benchmarks be profiled for specific workload types to simulate real-life examples? How could you possibly change the workload so much that it affects the result so significantly, and not change the nature of your benchmark?


Good questions (I am the author). It's complex though.

1) They are not the same benchmark. The Java 15 blog used OptaPlanner 7.x with optaplanner-examples 7.x, which use scoreDRL. The Java 17 blog used the 8.x versions, which use ConstraintStreams. That scoreDRL vs CS is a huge difference, which you see in the numbers (run on the same machine).

2) Remove the Machine Reassignment (B1 and B10) numbers, and it will look consistent: Java 17 is always better. The real question is why every JDK's (11 too!) performance is predictable on most use cases, but horribly unpredictable on the Machine Reassignment case.

3) I need to share the numbers of the 3 raw runs - and probably do more runs - to clearly show and explain what's going on there. It looks too damn fishy.


Apparently it depends on which GC is used, so:

> "0.37% faster than Java 16 for ParallelGC"

> "2.41% faster than Java 16 for G1GC (default)"

But as you asked, toy benchmark programs!

https://news.ycombinator.com/item?id=28530126


One common workload is Minecraft, which just happens to be completely incompatible if you have any mods to improve performance already.


woah, super interesting. you should test ZGC as well.


Still in the bottom half of most benchmarks. Still orders of magnitude slower than its market competitor (.NET).


I still have Java 8 installed. I'm not willing to accept Oracle's new Java license.


OpenJDK builds are GPLv2 with the classpath exception.

Oracle JDK builds are also now free:

https://blogs.oracle.com/java/post/free-java-license


>Oracle will provide these free releases and updates starting with Oracle JDK 17 and continue for one full year after the next LTS release. Prior versions are not affected by this change.

So, if that had applied to 11, it would lose updates from Oracle in 1 year from now. Whereas something like Corretto is currently committed to Java 11 updates until 2027 [0].

They also seem to be planning to increase the frequency of LTS releases to every 2 years, so, basically, assuming you upgrade to the next LTS immediately (unlikely in any moderately complex app with deps), you have 3 years of free updates.

No thanks.

[0] https://aws.amazon.com/about-aws/whats-new/2020/08/amazon-co...


In my experience, going from 11 to any version above it is quite trivial. 8 to 11 can be tricky, but I believe even that is easier than it was two years ago (thanks to improved library support).


Thanks, this is great news, but why the back and forth? Did they think anyone would pay for something that has been free for 27 years?

Java does not belong to Oracle; it belongs to whoever codes Java. I have yet to find a company that understands open source.


Oracle finances 90+% of the development of the free and open source OpenJDK, and just made their own OracleJDK completely free.

What is not free is support for a specific older version.


You shouldn't accept Oracle's new license, migrate to OpenJDK.


Which OpenJDK binary?



