lmbench was trying to give you both bandwidths and latencies of everything in a computer system, not just memory. This is one place where it is actually worth memorizing the rough numbers; they help immensely when you are sketching out a design. lmbench tried to give you insight into the latency/bandwidth of networks, disks, memory, etc.
Do you know, to the nearest order of magnitude:
round trip time over the network
bandwidth over your data center's network
disk seek time, end to end (not the silly 1/3-stroke seek with no rotational delay)
disk bandwidth at the fast end (outer tracks)
disk bandwidth at the slow end (inner tracks)
memory read / write / bcopy bandwidth
latency of a random read from memory
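To show why these numbers are worth memorizing, here is a back-of-envelope sketch. The figures below are my own rough order-of-magnitude assumptions for a commodity server, not measurements; the point is the arithmetic, which turns each latency into the serial ops/sec ceiling it implies. Measure your actual hardware with lmbench before trusting any of them.

```python
# Rough order-of-magnitude latencies (assumptions, not measurements).
LATENCY_NS = {
    "dram_random_read":      100,         # ~100 ns
    "datacenter_rtt":        500_000,     # ~0.5 ms round trip
    "hdd_seek_end_to_end":   15_000_000,  # ~15 ms full-stroke seek
}

def max_serial_ops_per_sec(latency_ns):
    """Upper bound with one operation in flight at a time:
    each op pays the full latency, so 1 second / latency."""
    return 1e9 / latency_ns

for name, ns in LATENCY_NS.items():
    print(f"{name}: at most {max_serial_ops_per_sec(ns):,.0f} serial ops/sec")
```

Even with made-up numbers this is enough to sanity-check a claim: anything that does one synchronous disk seek per operation is stuck in the low hundreds of ops/sec, full stop.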
I've sat in hundreds of design meetings where someone was claiming they'd build an XYZ server that did $BIG_NUMBER of ops/sec and I'd just stare at the ceiling, think about the network, and go "Maybe" or "not possible" based on the numbers. There must have been some time that I was wrong but I don't remember it.
It's somewhat harder today because all the big machines have multiple cores, so it's not as simple as knowing what one core with one network interface can do. But you should at least know that: you can assume linear scaling as an upper bound and get some idea of the capacity of the machine.
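The linear-scaling upper bound is just a multiply, but it's worth writing down because of what it means: real machines lose to shared memory buses, NICs, and locks, so this is a "can't do better than" number, never an estimate of what you'll actually get. The figures below are hypothetical.

```python
def capacity_upper_bound(single_core_ops_per_sec, cores):
    """Linear scaling across cores is an UPPER bound on machine capacity;
    contention on shared resources only pulls the real number down."""
    return single_core_ops_per_sec * cores

# Hypothetical box: one core does 100k ops/sec, 32 cores.
print(capacity_upper_bound(100_000, 32))  # at most 3.2M ops/sec
```

If a design pitch needs more than this bound from one machine, it's dead on arrival without sharding.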
It's always amazed me how easy it is to get a feel for these numbers and yet many people in the biz don't take the time to do so. I can only guess they think it is harder than it actually is.
Latency does not prevent X ops/second. It's not that hard to build a server that does a simple transaction 100,000 times a second. Keeping several machines synchronized at that throughput level can be next to impossible.