
Agree with your point, but average latency isn't as simple as inverse of throughput, even on a serial processor.

Imagine a process that accepts a request, sleeps for 10s, and then responds. If it accepts 1 million req/s, it can still return 1 million responses/s, so throughput is 1 million req/s while average latency is 10s.

Approximating latency as 1/throughput is only valid for a process that handles one request at a time (no concurrency). I doubt this is the case for Go.
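Little's Law makes this precise: in-flight requests L = throughput λ × latency W, so latency equals 1/throughput only when L = 1. A quick sketch of the arithmetic (numbers from the thought experiment above):

```go
package main

import "fmt"

// littleLaw returns the average number of requests in flight
// implied by Little's Law: L = lambda * W.
func littleLaw(throughputPerSec, latencySec float64) float64 {
	return throughputPerSec * latencySec
}

func main() {
	// The sleeping server: 1e6 req/s sustained at 10s latency means
	// ~10 million requests are in flight at any moment.
	fmt.Println(littleLaw(1e6, 10))

	// Only when L = 1 (strictly serial handling) does latency
	// collapse to 1/throughput: 100 req/s at 10ms each.
	fmt.Println(littleLaw(100, 0.01))
}
```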

Latency impacts user happiness (did the page load quickly?). Throughput impacts operating costs (I need to buy N% more servers to serve as many requests with Go 1.8 as I did with 1.7.5).

From the original GitHub issue:

                  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Go 1.8rc3     Latency      192.49us  451.74us  15.14ms   95.02%
    Go 1.7.5      Latency      210.16us  528.53us  14.78ms   94.13%
Go 1.8rc3 has both a lower mean latency and a lower standard deviation than Go 1.7.5. Go 1.8 decreased latency at the cost of decreased throughput.




