Because you need to optimize for reads. This setup doesn't make it easy to, say, select only the first 5.
I'd say, unless the list is really large, just rewrite all the "pos" values in a concatenated set of UPDATE queries (in a transaction, of course). Also, don't forget a unique constraint on whatever identifies the list plus the pos.
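A minimal sketch of that approach, using SQLite; the schema and names (`items`, `list_id`, `pos`) are hypothetical, not from the thread. The reorder rewrites every `pos` inside one transaction, and the parking-at-negative-values trick is one way to keep intermediate states from tripping the unique constraint:

```python
import sqlite3

# Hypothetical schema: UNIQUE(list_id, pos) enforces the ordering invariant.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (
        id      INTEGER PRIMARY KEY,
        list_id INTEGER NOT NULL,
        pos     INTEGER NOT NULL,
        title   TEXT    NOT NULL,
        UNIQUE (list_id, pos)
    );
    INSERT INTO items VALUES (1, 1, 0, 'a'), (2, 1, 1, 'b'), (3, 1, 2, 'c');
""")

def reorder(conn, list_id, ordered_ids):
    """Rewrite every pos for one list in a single transaction."""
    with conn:  # commits on success, rolls back on error
        # Park each row at a negative pos first so no intermediate state
        # collides under the UNIQUE constraint, then flip to the final value.
        for i, item_id in enumerate(ordered_ids):
            conn.execute(
                "UPDATE items SET pos = ? WHERE list_id = ? AND id = ?",
                (-(i + 1), list_id, item_id))
        conn.execute(
            "UPDATE items SET pos = -pos - 1 WHERE list_id = ? AND pos < 0",
            (list_id,))

reorder(conn, 1, [3, 1, 2])  # move 'c' to the front
# And the read side stays trivial -- e.g. the first 5:
print([r[0] for r in conn.execute(
    "SELECT title FROM items WHERE list_id = 1 ORDER BY pos LIMIT 5")])
```

The `ORDER BY pos LIMIT 5` at the end is the point of the whole layout: selecting the first few items is one cheap query.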
If you have a hashtable-based index on the PK, lookup by it is O(1), so getting the first k elements is O(k).
So what you'll end up with is:
Insert/Update/Delete - O(1) (insert/update/delete the row itself, update up to 2 other rows; each individual operation is O(1) and there's a fixed number of them)
Get the first k - O(k)
Can it get better than this?
Elegant, fast, robust, no reinventing the wheel?
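The O(1)/O(k) claim above can be sketched outside any database: a dict stands in for the hash index on the PK, and each row stores its successor's id. All names here (`Row`, `head`, `insert_after`) are illustrative, not from the thread:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Row:
    value: str
    next_id: Optional[int] = None  # PK of the successor, like a linked list

rows: dict[int, Row] = {}  # dict lookup ~ O(1), like a hash index on the PK
head: Optional[int] = None

def insert_after(new_id: int, value: str, after_id: Optional[int]) -> None:
    """O(1): write the new row plus at most one predecessor."""
    global head
    if after_id is None:            # insert at the front
        rows[new_id] = Row(value, head)
        head = new_id
    else:
        pred = rows[after_id]
        rows[new_id] = Row(value, pred.next_id)
        pred.next_id = new_id

def first_k(k: int) -> list[str]:
    """O(k): k pointer hops, each an O(1) lookup by PK."""
    out, cur = [], head
    while cur is not None and len(out) < k:
        out.append(rows[cur].value)
        cur = rows[cur].next_id
    return out

insert_after(1, "a", None)
insert_after(2, "c", 1)
insert_after(3, "b", 1)   # O(1) insert between "a" and "c"
print(first_k(3))         # ['a', 'b', 'c']
```

The catch, as the replies point out, is that in a real database each of those k hops is a separate indexed lookup, not a single ordered scan.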
It's less about whether the index is fast, and more about performing self joins (often hierarchical, depending on the DB engine and what it supports) with an arbitrary count. When it comes to DB queries, n queries for a page of n items is not really acceptable.
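On engines that support recursive CTEs, that hierarchical self-join can at least be collapsed into one round trip instead of n. A sketch against a hypothetical linked-list table, in SQLite:

```python
import sqlite3

# Hypothetical linked-list layout: each row points at its successor by id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, value TEXT, next_id INTEGER);
    INSERT INTO items VALUES (1, 'a', 3), (3, 'b', 2), (2, 'c', NULL);
""")

# Walk the chain from :head, following next_id up to :k hops, in one query.
FIRST_K = """
WITH RECURSIVE chain(id, value, next_id, depth) AS (
    SELECT id, value, next_id, 1 FROM items WHERE id = :head
    UNION ALL
    SELECT i.id, i.value, i.next_id, c.depth + 1
    FROM items i JOIN chain c ON i.id = c.next_id
    WHERE c.depth < :k
)
SELECT value FROM chain ORDER BY depth
"""
print([r[0] for r in conn.execute(FIRST_K, {"head": 1, "k": 3})])
```

It's one query, but the engine still performs k dependent index lookups under the hood, which is the locality point made below.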
That's not how databases are generally optimized. You're essentially doing random access, extracting a single tuple each time, in a data structure optimized for locality. Think of it more like a memory-mapped region.
Nowhere in that article was there anything about hardware-related issues. I believe what you're talking about has to do with how HDDs store DB files on disk, and how it's (much) faster to read sequentially from such a disk than to seek for every record. Hence all sorts of optimizations that DBs do, like B+trees, etc. If your dataset fits in memory, it's a non-issue: the engine will never reach for the disk anyway. Even if it doesn't fit in memory (in which case I would start any optimization by adding more memory, if possible), it's much less of an issue with SSDs. But again, the article's tone is more scientific than applied/practical. In some scenarios (large datasets on HDDs), random seeks may be an issue.