I don't know the granularity at which the deoptimization guards get inserted, but the basic idea is that if the constant changes, any running code that depended on the value of that constant has to deoptimize.
In your example, we would never enter the optimized code path without knowing that the constant's still the same as last time. Since no loop is emitted, there's no deoptimization mid-loop required. So we either run the optimized code, or we never get to the optimized code.
If a deopt is required mid-loop, HotSpot inserts a "trap" that, when triggered, immediately branches to the interpreter, carrying over the current state.
Of course there are limits to how far optimization can go, but having a way to tell HotSpot that there's a known-immutable constant object reference at this point in the code opens up a lot of opportunities.
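To make the guard idea concrete, here's a rough sketch in plain Java of what the compiled code logically amounts to. All names here are made up for illustration; in reality the guard is an "uncommon trap" baked into the machine code that hands control back to the interpreter, not a Java-level branch:

```java
public class GuardSketch {
    // Hypothetical "constant" the JIT has speculated on.
    static int CONFIG = 42;

    // Optimized path: the value 42 has been folded into the code,
    // protected by a guard that checks the speculation still holds.
    static int computeOptimized() {
        if (CONFIG != 42) {
            // Guard failed: "deoptimize" by falling back to the
            // generic path (stand-in for re-entering the interpreter).
            return computeGeneric();
        }
        return 84; // constant-folded result of CONFIG * 2
    }

    // Generic path: re-reads the field every time, like interpreted code.
    static int computeGeneric() {
        return CONFIG * 2;
    }

    public static void main(String[] args) {
        System.out.println(computeOptimized()); // fast, folded path
        CONFIG = 10;                            // speculation broken
        System.out.println(computeOptimized()); // guard trips, slow path
    }
}
```

The point of the sketch is the shape of the trade-off: the optimized body does no field load at all, and all the cost of a changing "constant" is pushed into the rarely taken guard branch.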