The distribution of new Java versions is already decoupled from tzdata updates.
If you want to update your tzdata, you should use the tzupdater tool that Oracle provides for Java; it has been updated to support JSR-310. It is just a command-line tool, so performing the updates can be integrated into Chef, Puppet, etc. as needed.
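If you just want to confirm what tzdata a given JVM is actually using (for example, to verify that a tzupdater run took effect), the java.time.zone.ZoneRulesProvider API reports the loaded version. A minimal sketch, assuming any valid region id (here "America/New_York") behaves the same way:

    import java.time.zone.ZoneRulesProvider;

    // Prints the tzdata (TZDB) version(s) this JVM has loaded for one region id.
    public class TzdbVersionCheck {
        public static void main(String[] args) {
            // Map keys are tzdata version strings such as "2013c"; the built-in
            // provider normally reports a single entry.
            ZoneRulesProvider.getVersions("America/New_York")
                    .keySet()
                    .forEach(System.out::println);
        }
    }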
There was discussion during the development of JSR-310 about including library methods to update the tzdata of a running JVM. There are a number of "not obvious to a developer" things that happen if you do this, and it was decided that the technical complexity and potential for confusion outweighed the benefit of implementing it.
That's what I was referring to, though, as being operationally expensive. I need to ensure we're using up-to-date tzdata, but there could be multiple different runtimes running on hosts in locations I'm not aware of.
All I really wanted to say is that I want to be able to drop the binary tzdata files into a well-known location on every host and have all of the relevant languages and runtimes point at those files, without running any additional tools. Complexity starts multiplying once you add these external processes because of the additional failure modes -- e.g., someone can run the tool manually and put things out of sync, there are no tools to monitor that things stay in sync, backing out bad deployed data has to be handled, and installing a new, not-yet-updated JRE version on a machine reintroduces stale data.
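A small per-host check can at least detect the out-of-sync case. A rough sketch, assuming a hypothetical /etc/tzdata-expected-version file dropped by config management that contains the version string the fleet should be on:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.time.zone.ZoneRulesProvider;

    // Hypothetical sync check: compares the tzdata version this JVM reports
    // against the version config management dropped on the host. The file path
    // and its format (a bare version string like "2013c") are assumptions made
    // for illustration only.
    public class TzdataSyncCheck {
        public static void main(String[] args) throws IOException {
            String expected = new String(
                    Files.readAllBytes(Paths.get("/etc/tzdata-expected-version")),
                    StandardCharsets.UTF_8).trim();
            // The built-in TZDB provider normally exposes exactly one version key.
            String actual = ZoneRulesProvider.getVersions("America/New_York").lastKey();
            if (!expected.equals(actual)) {
                System.err.printf("tzdata out of sync: expected %s, JVM has %s%n",
                        expected, actual);
                System.exit(1);
            }
            System.out.println("tzdata in sync: " + actual);
        }
    }

This still does not remove the external update step, but it does address the "no tools to monitor that things stay in sync" gap with something that can run from cron or a health check.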