There used to be some hard-and-fast rules about scaling vertically in sysadmin circles.
You can see where they come from when you consider some fairly static numbers in systems: page sizes are 4 KB, block sizes are usually standard (512 B or 4 KB), and network interface throughput hasn't increased in a decade or more.
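As a rough illustration, these constants are easy to check at runtime. Here's a minimal Python sketch, assuming a Linux-like system; it reads the kernel page size and the block size of the root filesystem:

    import os

    # Kernel page size (typically 4096 bytes on x86-64)
    page_size = os.sysconf("SC_PAGE_SIZE")

    # Block size of the root filesystem (commonly 4096 bytes;
    # the underlying disk sectors are usually 512 B or 4 KB)
    block_size = os.statvfs("/").f_bsize

    print(f"page size:     {page_size} bytes")
    print(f"fs block size: {block_size} bytes")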
Some of those rules need to be challenged as the technologies and systems change over time. The 20M-row limit for a heavily updated table should be treated as an old "rule", dating from 2010 at the latest, when mechanical drives were still the norm: back then, having your entire table index fit in a few contiguous sectors on the physical disk made a substantial performance difference, because indexes need to flush to disk on update (or at least they used to).
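To get a feel for the scale involved, here's a back-of-envelope Python sketch; every figure in it (entry size, page size, seek time, dirty fraction) is an assumption picked for illustration, not a measurement:

    # Why index locality mattered on mechanical drives:
    # rough arithmetic with assumed, illustrative numbers.

    ROWS = 20_000_000        # the old rule-of-thumb table size
    ENTRY_BYTES = 16         # assumed B-tree entry: 8-byte key + 8-byte pointer
    PAGE_BYTES = 4096        # typical index page size
    SEEK_MS = 9.0            # assumed average seek on a 7200 rpm drive

    index_bytes = ROWS * ENTRY_BYTES
    index_pages = index_bytes // PAGE_BYTES
    print(f"index size: ~{index_bytes / 2**20:.0f} MiB ({index_pages:,} pages)")

    # If an update burst dirties even 1% of those pages, each random
    # page write costs a full seek on a mechanical drive:
    dirty_pages = index_pages * 0.01
    print(f"flushing 1% of pages: ~{dirty_pages * SEEK_MS / 1000:.1f} s of seek time")

At roughly 300 MiB under these assumptions, the index is far too big for a drive's cache, so random flushes translate directly into seeks; on an SSD those same writes are orders of magnitude cheaper, which is why the old limit stopped being a hard one.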