All I see are a few perf graphs that show a 20% runtime reduction in a few cases and a > 50% reduction in one case. This gives me no insight whatsoever into what is going on under the hood.
Is it that these dozen or so benchmarks end up using > 4K and < 8K of heap, so the extra 20% of time was just going into an extra memory allocation?
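To make the guess concrete: if the allocator (or stack/arena sizing) has a 4K class and an 8K class, a working set that lands between them would force the larger, slower path. This is a toy model of that idea, not Go's actual runtime allocator; `sizeClass` and the bucket sizes are made up for illustration.

```go
package main

import "fmt"

// sizeClass is a hypothetical two-bucket allocator model used only to
// illustrate the threshold guess above: requests up to 4 KiB take a
// cheap 4 KiB class, requests between 4 KiB and 8 KiB spill into the
// 8 KiB class, and anything larger is a "large" allocation.
func sizeClass(n int) int {
	switch {
	case n <= 4<<10:
		return 4 << 10 // fits the small class
	case n <= 8<<10:
		return 8 << 10 // spills over the 4 KiB threshold
	default:
		return n // large allocation, handled separately
	}
}

func main() {
	// Workloads just over 4 KiB (e.g. 5000 bytes) pay for the bigger
	// class; raising the first class to 8 KiB would absorb them.
	for _, n := range []int{1024, 5000, 8192, 9000} {
		fmt.Printf("need %5d bytes -> class %5d\n", n, sizeClass(n))
	}
}
```

If that hypothesis is right, the benchmarks that improved would be exactly the ones whose footprint sits in that 4K–8K band.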
P.S. Interesting that I got two snarky comments for asking a basic question about Go. Does not bode well.