You are correct -- the point being made was that a lot of people aren't aware of that and just take it as gospel that an O(1) algorithm must be faster than an O(n) one. But, as you said, big-O notation only guarantees that once n exceeds some threshold n0, and that threshold may be (and often is) very large.
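A minimal sketch of that crossover point, using made-up cost functions (the constants 1000 and 2 are purely illustrative, not taken from any real algorithm): an "O(1)" operation with a large fixed cost versus an "O(n)" operation with a small per-element cost. The linear one wins until n passes n0 = 500.

```python
def constant_cost(n: int) -> int:
    # O(1): fixed cost regardless of n, but the constant is big.
    return 1000

def linear_cost(n: int) -> int:
    # O(n): cost grows with n, but each element is cheap.
    return 2 * n

# Below the threshold n0 = 500 the O(n) version is actually cheaper.
for n in (10, 100, 500, 1_000, 10_000):
    winner = "O(n)" if linear_cost(n) < constant_cost(n) else "O(1)"
    print(f"n={n:>6}: O(1)={constant_cost(n):>6}, O(n)={linear_cost(n):>6} -> {winner} wins")
```

Real standard libraries exploit the same effect, e.g. switching to insertion sort for small subarrays inside an O(n log n) sort.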
After all, the most efficient (in terms of big-O) known algorithm for multiplying two numbers uses a 1729-dimensional Fourier transform and only becomes more efficient than simpler algorithms once the numbers involved are significantly larger than the number of atoms in the universe[1].
[1]: https://en.wikipedia.org/wiki/Galactic_algorithm