Integer arithmetic is really quantized logarithmic complexity. If your hardware has a word size your number fits into, you're computing n+1 or n×n in constant time. But if your data set doubles in size (especially for multiplication), you may find yourself needing a word size that doesn't exist, or a more expensive one. Contemporary code is more likely to reach for bignums, which are O(log n), but again stair-stepped by the number of machine words it takes to represent the whole number. This is a bigger problem when your data set is sparse, so that values grow faster than the population (e.g., UUIDs).
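A quick sketch of the stair-stepping, using CPython's arbitrary-precision ints (the exact sizes below are CPython implementation details, not guarantees): storage, and with it arithmetic cost, grows in steps with the number of machine words ("limbs") needed to hold the value.

```python
import sys

# CPython ints are bignums: once a value outgrows one machine word,
# both its storage and the cost of arithmetic on it grow stepwise
# with the number of limbs required to represent it.
for bits in (8, 64, 128, 1024, 4096):
    n = 1 << bits
    print(f"~{bits}-bit value: bit_length={n.bit_length()}, "
          f"bytes={sys.getsizeof(n)}")
```

Each jump in `sys.getsizeof` marks the "more expensive bucket" described above.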
But on the topic of hash tables, you can only reach 'O(1)' if you can avoid collisions. And to avoid collisions you need a key of length m, which grows as n grows. You cannot put 51 pigeons in 50 pigeonholes. So the length of your hash keys is m >= log n, which means the time to compare keys is also O(log n), which means hash tables are never actually O(1) access or insert time. Actual access time for a hash table on non-imaginary hardware is O(√n · log n), not O(1).
If you consider that we have applications that are just a single hash table occupying the entire RAM of a single machine, then this is not picking nits. It's capacity planning.
You cannot put 51 pigeons in 50 pigeonholes. So the key length of your hash keys is m >= logn, which means the time to compare keys is also logn, which means hash tables are never actually O(1) access or insert time.
I am not sure I am following this argument. You are not going to have more than 2^64 pigeons and pigeonholes on any system soon, and I would almost dare to say you will never get to 2^128. And for 64- or 128-bit keys, comparisons and many other operations are for all practical purposes constant time. I guess you could argue that this is sweeping a factor of log(n) under the rug because of things like carry chains, which could be faster for smaller bit sizes, but I am not sure that matters on common hardware: an addition takes one clock cycle regardless of the operand values.
Sure, keys can be and often are longer, but not because you were forced to use long keys; it just happened that the thing you want to store in a hash table is a long string. The way you worded it made it sound to me like you were saying that one has to use long keys, not that in practice one often has to deal with long keys. But even then I am not convinced this should add a factor to the complexity analysis. I would argue, at least in some situations, that hashing long keys should still be considered constant time, even for the longest keys. Though I can also imagine taking it into account when the key length is not only large but also highly variable.