> Devirtualization + inlining + dead branch elimination is an instance of specialization
Your example did not involve any devirtualization.
> No. Some things are undecidable and/or would lead to code explosion. E.g., it's often hard to statically prove that a subroutine that takes an int would always receive the number 2, and specializing it for all ints would cause code explosion (again, contrived example but with real-world counterparts).
Again, your example did not have any such issues.
And if it was hard to statically prove a subroutine that takes an int always receives 2, then guess what? A JIT wouldn't do it, either. They don't actually specialize that much on runtime information, because gathering that information would destroy performance. Devirtualization? Sure, limited in scope, and with huge payoffs for doing it. All arguments analyzed & binned? Good god no. That'd require a ludicrous amount of resources, and the payoffs would be incredibly hard to quantify in any meaningful way.
Take your example - all it would do is eliminate a single branch. If it was so common to take a single avenue that it could be eliminated, then pretty much every branch predictor would do it anyway. So... why specialize it?
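For what it's worth, the cost argument can be made concrete. Here's a toy sketch in Python (all names invented, and emphatically not how any production JIT is implemented) of what per-value profiling and guarding would look like. Note that every single call pays a profiling or guard-check cost, which is exactly why JITs don't do this for arbitrary arguments:

```python
def generic(x):
    # The general-purpose version: handles any x.
    return x * x + 1

def make_profiling_wrapper(threshold=100):
    """Wrap `generic` with toy per-value profiling (illustrative only)."""
    seen = {"value": None, "count": 0, "fast": None}

    def call(x):
        fast = seen["fast"]
        if fast is not None:
            if x == seen["value"]:       # guard: still the hot value?
                return fast()            # specialized fast path
            seen["fast"] = None          # guard failed: "deoptimize"
            return generic(x)
        # Profiling: count how often the same value shows up.
        if x == seen["value"]:
            seen["count"] += 1
        else:
            seen["value"], seen["count"] = x, 1
        if seen["count"] >= threshold:
            result = generic(x)          # compute once...
            seen["fast"] = lambda: result  # ...and fold it into a fast path
            return result
        return generic(x)

    return call

f = make_profiling_wrapper(threshold=3)
for _ in range(5):
    f(2)              # after 3 identical calls, a guarded fast path exists
assert f(2) == 5      # hits the specialized path
assert f(7) == 50     # guard fails: falls back and deoptimizes
```

Even in this toy, the bookkeeping dwarfs the work being saved, and a branch predictor would have handled the hot case for free.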
> Take your example - all it would do is eliminate a single branch. If it was so common to take a single avenue that it could be eliminated, then pretty much every branch predictor would do it anyway. So... why specialize it?
I don't think you've actually read the comment. The particular example is just a demonstration of a principle. Obviously that particular case is too simple to be interesting.
Uh, what? That's just a basic constant fold optimization that can be statically handled? It even has examples showing that anything that would require runtime information (like mutable strings) doesn't get optimized.
> That's just a basic constant fold optimization that can be statically handled
Incorrect. No static analysis can establish the constant in some of those cases, if only because the values are not provably constant. If the situation changes at runtime, those routines would need to be deoptimized and recompiled.
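To illustrate the deoptimize-and-recompile cycle: here's a toy Python sketch (the `Cell` class and its invalidation hook are invented for illustration, not any real JIT's machinery) of folding a value that no static analysis could prove constant, because it genuinely can change at runtime:

```python
class Cell:
    """A mutable binding with an invalidation hook, standing in for a
    JIT's 'assumed constant' machinery (names here are made up)."""
    def __init__(self, value):
        self._value = value
        self._dependents = []     # "compiled code" that assumed this value

    def get(self):
        return self._value

    def set(self, value):
        if value != self._value:
            # Writing a new value "deoptimizes" everything that assumed it.
            for invalidate in self._dependents:
                invalidate()
            self._dependents.clear()
        self._value = value

    def assume_constant(self, on_invalidate):
        self._dependents.append(on_invalidate)
        return self._value

LIMIT = Cell(10)        # could be reassigned at any time: not provably constant
state = {"fast": None}

def clamp(x):
    if state["fast"] is not None:
        return state["fast"](x)                 # folded against the old LIMIT
    limit = LIMIT.assume_constant(lambda: state.update(fast=None))
    state["fast"] = lambda x, lim=limit: min(x, lim)
    return state["fast"](x)

assert clamp(99) == 10  # uses the folded limit
LIMIT.set(5)            # invalidates the specialized code
assert clamp(99) == 5   # "recompiled" against the new value
```

An AOT compiler has to emit the general code for `clamp` because it cannot rule out the reassignment; the runtime can fold it anyway because it can undo the fold when the assumption breaks.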
> It even has examples showing that anything that would require runtime information (like mutable strings) doesn't get optimized.
The example does not show that "anything that would require runtime information doesn't get optimized", just that that particular case doesn't. I don't know Ruby, but I can imagine why guarding the constness of a mutable array may not be advisable.
But, anyway, my point isn't about partial evaluation of values specifically (although JITs can and do do that); instead of an int and the value 2, consider a type with 4 billion subtypes. JITs do specializations that an AOT compiler simply cannot possibly do (not without code explosion).
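A concrete way to see how a runtime handles "billions of subtypes" without code explosion is a monomorphic inline cache: specialize a call site on the one type actually observed, guard it, and fall back to generic dispatch on a miss. A toy Python sketch (classes and names invented for illustration):

```python
class Circle:
    def area(self):
        return 3.14159          # unit circle, for brevity

class Square:
    def area(self):
        return 1                # unit square

def make_call_site():
    """A toy per-call-site inline cache: remembers one observed type."""
    cache = {"cls": None, "method": None}

    def call_area(obj):
        if type(obj) is cache["cls"]:      # guard on the cached type
            return cache["method"](obj)    # "devirtualized" direct call
        # Cache miss: do the generic (virtual) dispatch, then cache it.
        cache["cls"] = type(obj)
        cache["method"] = type(obj).area
        return obj.area()

    return call_area

site = make_call_site()
assert site(Circle()) == 3.14159   # miss: generic dispatch, fills the cache
assert site(Circle()) == 3.14159   # hit: guarded fast path
assert site(Square()) == 1         # different type: miss, re-cache
```

The key point: the code size is per call site, not per subtype, so it stays bounded no matter how many subtypes exist; an AOT compiler specializing for every possible receiver type has no such bound.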
Now, as I wrote in the original comment, I'm definitely not saying that JIT compilation is "better" than AOT. I am saying that the "zero-cost abstraction" philosophy of C++/Rust and the "zero-cost use" philosophy of JITs are two extremes that are each perfectly suited to two very different software domains.
V8 uses runtime type/value information to specialize methods in some cases[1]. I find it hard to believe that HotSpot doesn’t either.
Obviously complex objects are impossible to handle, but branching on an integer that's always the same value isn't. The same goes for constant folding: if an argument is determined to be constant at runtime, that's going to affect the choices the JIT makes.
That's specializing on type, not on a given specific value. Being JS, the value influences the type, but it's only looking at a coarse binning (int vs. double vs. string, etc.), not specializing for a single specific value. A form of de-virtualization, if you will.
Hotspot wouldn't even need to bother with that most of the time since it's primarily running statically typed languages for which that doesn't apply in the first place.
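The distinction can be sketched like so: specializing per coarse type bin keeps one variant per bin (a handful at most), whereas specializing per value would be unbounded. A toy Python illustration (the "compilation" step here is just picking a lambda):

```python
def compile_for(bin_type):
    # Pretend "compilation": one variant tuned to each coarse bin.
    if bin_type is int:
        return lambda a, b: a + b           # integer add
    if bin_type is float:
        return lambda a, b: a + b           # float add (distinct machine code in reality)
    return lambda a, b: str(a) + str(b)     # generic/string fallback path

def make_binned_add():
    variants = {}                           # bin -> compiled variant

    def add(a, b):
        key = type(a)                       # coarse bin, not the value itself
        fn = variants.get(key)
        if fn is None:
            fn = variants[key] = compile_for(key)
        return fn(a, b)

    return add

add = make_binned_add()
assert add(2, 3) == 5
assert add(2.5, 0.5) == 3.0
assert add("a", "b") == "ab"
```

`variants` can only ever grow to the number of bins, which is why this kind of specialization is cheap enough to do, while per-value specialization generally isn't.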
For your example? Yes, it was.