> Most of the 1980-1995 cases could fit the entire dataset in CPU cache and be insanely fast.
They couldn't then: CPU caches of that era were measured in kilobytes, when they existed at all. They had to fit it in RAM.
> Most things I query these days are in the gigabytes to terabytes range.
That still falls in the range from "fits in RAM on a typical PC" to "fits on an SSD in a PC, fits in RAM on a server".
There's little excuse for the slowness of current search interfaces, even if your data is in the gigabytes-to-terabytes range. That's where all those "a bunch of Unix tools on a single server will be an order of magnitude more efficient than your Hadoop cluster" articles came from.
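For concreteness, this is the shape of pipeline those articles benchmark. A minimal sketch, assuming hypothetical Apache-style access logs with the HTTP status code in field 9 (the filenames and field positions here are made up):

    # Stream-aggregate a few GB of logs on one box; everything is
    # sequential I/O, so the data never has to fit in RAM at once.
    cat access.log.* \
      | awk '{ c[$9]++ } END { for (s in c) print s, c[s] }' \
      | sort -k2 -rn

    # Same thing fanned out across cores with xargs: per-file partial
    # counts, then one merge pass. This map-and-reduce-in-shell is the
    # trick behind the "faster than your Hadoop cluster" write-ups.
    find . -name 'access.log.*' -print0 \
      | xargs -0 -n1 -P"$(nproc)" awk '{ c[$9]++ } END { for (s in c) print s, c[s] }' \
      | awk '{ c[$1] += $2 } END { for (s in c) print s, c[s] }' \
      | sort -k2 -rn

No cluster, no serialization, no network shuffle: the disk's sequential read speed ends up being the only real bottleneck.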