
It really depends on what level you're working on improving. It's effectively queues all the way down, but programmers hate reading statistics. E.g. in a server process, a request flows through a series of queues: the TCP socket, your process, the disk, the CPU's reorder buffer, and the scheduler.

You have three areas to study:

1. Measurement - makes you define the performance you're looking for and measure it. Until you do this it's mostly a bullshit "make people stop complaining about performance" errand that's too wishy-washy to do more than take a few stabs in the dark. With containers and a decent capture of samples of your load, a benchmark is pretty straightforward to set up.
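A minimal sketch of that kind of benchmark: replay captured requests against your code and report latency percentiles rather than a single average. `handle` here is a hypothetical stand-in for whatever you actually serve.

```python
import time

def handle(request):
    # Placeholder workload; in practice this would invoke your real handler.
    sum(range(10_000))

def benchmark(requests):
    """Time each request and return p50/p95/p99 latencies in seconds."""
    samples = []
    for r in requests:
        t0 = time.perf_counter()
        handle(r)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    pick = lambda q: samples[min(len(samples) - 1, int(q * len(samples)))]
    return {"p50": pick(0.50), "p95": pick(0.95), "p99": pick(0.99)}

print(benchmark(range(1000)))
```

Percentiles matter because queues make latency distributions long-tailed; the mean hides exactly the behavior users complain about.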

2. Modeling - these models are usually little more than measured rates and latencies applied to Little's Law. Pocket-calculator math is often good enough. At worst, an M/M/1 queue.
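The pocket-calculator math really is small. A sketch with made-up numbers (800 req/s arriving at a server that can service 1000 req/s), using the standard M/M/1 formulas and Little's Law:

```python
def mm1_stats(arrival_rate, service_rate):
    """M/M/1 queue: return (utilization, mean requests in system,
    mean time in system). Rates are requests per second."""
    assert arrival_rate < service_rate, "queue is unstable"
    rho = arrival_rate / service_rate   # utilization
    L = rho / (1 - rho)                 # mean number in system
    W = L / arrival_rate                # Little's Law: L = lambda * W
    return rho, L, W

rho, L, W = mm1_stats(800.0, 1000.0)
print(f"utilization={rho:.0%}  in-system={L:.1f}  latency={W*1000:.1f} ms")
# → utilization=80%  in-system=4.0  latency=5.0 ms
```

Note how fast latency blows up near saturation: at 950 req/s the same formula gives 20 ms, at 990 req/s, 100 ms. That nonlinearity is usually the insight you're after.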

3. Instrumentation - Figuring out how to attribute your computer's resources (memory, CPU time, IOPS, etc.) to different parts of your code. Tracing libraries, Linux perf, and eBPF can be useful here.
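A crude sketch of the attribution idea, charging wall-clock and CPU time to named sections of code. It illustrates the same accounting perf/eBPF tooling does with far less overhead; the section names are made up.

```python
import time
from collections import defaultdict
from contextlib import contextmanager

totals = defaultdict(lambda: [0.0, 0.0])  # name -> [wall seconds, cpu seconds]

@contextmanager
def section(name):
    """Attribute the time spent inside this block to `name`."""
    w0, c0 = time.perf_counter(), time.process_time()
    try:
        yield
    finally:
        totals[name][0] += time.perf_counter() - w0
        totals[name][1] += time.process_time() - c0

with section("parse"):
    sum(i * i for i in range(100_000))   # CPU-bound work
with section("io-wait"):
    time.sleep(0.05)                     # blocked, burns no CPU

for name, (wall, cpu) in totals.items():
    print(f"{name:8s} wall={wall*1e3:6.1f} ms  cpu={cpu*1e3:6.1f} ms")
```

The wall-vs-CPU split is the useful part: "io-wait" shows wall time but almost no CPU time, which tells you whether to fix your code or your I/O.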

There are a decent number of computer performance books. I like the ones by Jain (great, but AFAICT out of print) and Harchol-Balter. For work, you shouldn't read them straight through; iterate through parts as you better understand the problem you're trying to solve and start choosing strategies. On the tactical side, Brendan Gregg has some decent books on measurement tools.

Figure out what you want to improve and how to measure it. Then start attributing the existing performance to implementation choices that you can control. Then control those choices (e.g. change the algorithm, load balance better, make design trade-offs) to improve performance.



