
Then read the three links posted above?



All I see are a few perf graphs that show a 20% runtime reduction in a few cases and a > 50% reduction in one case. This gives me no insight whatsoever into what is going on under the hood.

Is it that for these dozen or so benchmarks, we end up using > 4K and < 8K of heap? So the extra 20% time is just going into a memory allocation?

P.S. Interesting that I got two snarky comments for asking a basic question about Go. Does not bode well.


>Is it that for these dozen or so benchmarks, we end up using > 4K and < 8K of heap?

Stack, not heap. We are talking about changing the default stack size.

>So the extra 20% time is just going into a memory allocation?

Yes, and the bookkeeping overhead involved at the OS and runtime levels.
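
For intuition about where that time goes: each new goroutine starts with the default stack, and any call chain that outgrows it triggers a stack grow/copy in the runtime. A minimal sketch (the function names, frame size, and recursion depth are illustrative, not taken from the linked benchmarks):

    package stackgrow

    import "testing"

    // useStack recursively consumes roughly depth * 256 bytes of stack,
    // forcing the runtime to grow (copy) the goroutine's stack once the
    // initial allocation is exceeded.
    //
    //go:noinline
    func useStack(depth int) int {
        var buf [256]byte // live frame-local buffer so the frame stays ~256 B
        buf[0] = byte(depth)
        if depth == 0 {
            return int(buf[0])
        }
        return useStack(depth-1) + int(buf[0])
    }

    // BenchmarkStackGrowth spawns a fresh goroutine per iteration, so every
    // iteration starts from the default stack size and pays the growth cost.
    func BenchmarkStackGrowth(b *testing.B) {
        done := make(chan int)
        for i := 0; i < b.N; i++ {
            go func() {
                done <- useStack(64) // ~64 * 256 B ≈ 16 KiB, past a small default
            }()
            <-done
        }
    }

Running this with `go test -bench=StackGrowth` under runtimes with different initial stack sizes is one way to see the growth cost show up directly in the per-op time.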



