This comment makes perfect sense if load is a smooth function. But it is not. It tends to be a step function.
The most recent 2 data points tell you whether the problem is currently getting worse, getting better, or holding steady. The third gives you a sense of whether it has been going on for a while.
No. Check out this video by Zach Tellman. He talks about queues and how they break down under load. One of the least intuitive things he points out is that when you have more processors, the breakdown tends to be more of a step function: everything is running smoothly till the moment that it isn't.
The point he makes arises from basic queueing theory and is applicable to all kinds of systems, and how those systems react to load. It's got little to do with particular hardware and everything to do with basic math.
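To make that math concrete (a rough sketch of my own, not something taken from the video): in a single-server M/M/1 model the mean queueing delay is rho / (1 - rho) service times, which barely moves until utilization gets close to 1 and then explodes.

    # Rough M/M/1 sketch: mean time spent waiting in the queue, expressed
    # in multiples of the mean service time, is rho / (1 - rho). It stays
    # small until utilization nears 1, then shoots up -- the "step".
    def mm1_wait_in_service_times(rho):
        if rho >= 1.0:
            return float("inf")  # arrivals outpace service; the queue grows without bound
        return rho / (1.0 - rho)

    for rho in (0.50, 0.90, 0.99, 0.999, 1.01):
        print(f"utilization {rho:6.1%} -> wait ~{mm1_wait_in_service_times(rho):.1f}x service time")

At 90% utilization you wait about 9 service times, at 99% about 99, and past 100% the queue just keeps growing.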
No. It depends on the fact that when something decides to go wrong, it frequently goes south fast. So, for example, a busy lock in a database goes from 99% of capacity (using very little resources) to 101%, processes start backing up, and the system goes haywire.
Think of it as being like traffic. Analytically it is easy to imagine smoothly varying speeds. In reality there is a car accident, then a sudden traffic jam. We are poking around to figure out where and when that traffic jam happened. And sometimes the cars get cleared off the road, and by the time we begin looking the jam is already evaporating.
So comparing the 1 min and 5 min load averages tells us whether the jam is getting worse, holding steady, or improving on its own. Looking at the 15 minute one tells us whether this happened recently.
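A minimal sketch of that reading of the numbers, assuming a Linux box where /proc/loadavg exposes the 1/5/15 minute averages (the 10% thresholds are arbitrary):

    # Classify the "traffic jam" trend from /proc/loadavg (Linux).
    # Comparing the 1 and 5 minute averages gives the direction; the 15
    # minute average hints at whether the jam is recent or long-running.
    def load_trend(path="/proc/loadavg"):
        with open(path) as f:
            one, five, fifteen = (float(x) for x in f.read().split()[:3])
        if one > five * 1.1:
            direction = "getting worse"
        elif one < five * 0.9:
            direction = "improving"
        else:
            direction = "holding steady"
        age = "started recently" if fifteen < five * 0.9 else "been going on a while"
        return f"load {one:.2f}/{five:.2f}/{fifteen:.2f}: {direction}, {age}"

    print(load_trend())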
Performance tends to degrade rather... rapidly when you start to meaningfully swap out actual working memory. With modern quantities of RAM I'd almost prefer to just run swapless and let the system OOM so it can just be rebooted and get on with it...
Linux doesn't handle this case well. You'll eventually get the OOM kill, but the thrashing is actually worse than what you get via swap (it arrives more suddenly and causes a more severe slowdown than swap thrashing, making it difficult to manually fix the problem). I think this is because the disk cache gets severely squeezed before the OOM killer actually gets invoked, but I'm not sure.
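One way to watch that cache squeeze develop (a sketch only, assuming a Linux /proc/meminfo; the 5% cutoff is an arbitrary choice, not a kernel constant):

    # Watch the page cache and available memory shrink as pressure builds (Linux).
    # /proc/meminfo reports most values in kB; when MemAvailable collapses,
    # the OOM killer is usually not far behind.
    def meminfo_kb():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])  # first field is the number, unit follows
        return info

    m = meminfo_kb()
    available_pct = 100.0 * m["MemAvailable"] / m["MemTotal"]
    print(f"cached: {m['Cached'] // 1024} MiB, available: {available_pct:.1f}%")
    if available_pct < 5.0:
        print("warning: likely thrashing / OOM-killer territory")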
That could be accomplished with a set of two.
A set of three could in theory give you acceleration.
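A toy sketch of that idea: treat the 1/5/15 minute numbers as three samples and take finite differences. They are overlapping exponentially decayed averages rather than evenly spaced samples, so this is only a rough indicator:

    # Toy finite differences over the (1, 5, 15) minute load averages.
    # Positive values mean the more recent average is higher, i.e. worsening.
    def load_derivatives(one, five, fifteen):
        velocity = one - five                            # two points: direction of change
        acceleration = (one - five) - (five - fifteen)   # three points: is the change speeding up?
        return velocity, acceleration

    v, a = load_derivatives(8.0, 4.0, 3.0)
    print(f"velocity ~{v:+.1f}, acceleration ~{a:+.1f}")  # both positive: worsening, and doing so faster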