Aggressive GC latency improvements, like the ones Go is making, almost always come at the cost of throughput. For example, Azul C4 has lower throughput than HotSpot (at least per the numbers cited in the C4 paper). There's no free lunch in GC.
But I would argue that most people who use Go use it to write user-facing server apps, or at least server apps in which response time is an important metric. I don't know anybody who uses Go primarily to write batch jobs, where throughput matters more than latency.