It's a smart, easy optimization, so I'd be surprised if .NET weren't using it, but ultimately it has the same effect as reducing the frequency of GC pauses: it doesn't make the code itself execute faster, and it doesn't change the fact that the code never passed through a more thorough analyzer/optimizer. How good are JIT compilers at automatic vectorization, for example? Opportunities for vectorization could be encoded into the bytecode so that the JIT could exploit the SIMD capabilities of whatever architecture it runs on, but I don't think .NET does that automatically.
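For context, the closest thing .NET offers is explicit (opt-in) SIMD via System.Numerics.Vector&lt;T&gt;: the IL carries no fixed vector width, and the JIT substitutes whatever the host CPU supports (SSE2, AVX, NEON, ...). That's the "encode it in the bytecode" idea, but the programmer has to write it; ordinary scalar loops aren't auto-vectorized. A minimal sketch (the method and array names are just placeholders):

```csharp
using System.Numerics;

static class SimdSketch
{
    // Elementwise add. Vector<float> has no fixed width in the IL;
    // Vector<float>.Count is resolved at JIT time for the host CPU.
    public static void Add(float[] a, float[] b, float[] dst)
    {
        int i = 0;
        int width = Vector<float>.Count; // e.g. 4 on SSE2, 8 on AVX2
        for (; i <= a.Length - width; i += width)
        {
            var va = new Vector<float>(a, i);
            var vb = new Vector<float>(b, i);
            (va + vb).CopyTo(dst, i);
        }
        for (; i < a.Length; i++) // scalar tail for leftover elements
            dst[i] = a[i] + b[i];
    }
}
```

The same IL runs with different vector widths on different machines, which is exactly the portability the bytecode encoding buys you; what's missing is the compiler finding these opportunities on its own, the way an ahead-of-time optimizer like GCC or LLVM does at -O2/-O3.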