Another common one I see is conflating constant-time with no-time. It's near enough to true for a single lookup, but do 30,000 individual lookups in a tight loop and you've got some real performance issues that are relatively invisible.
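To make that concrete, here's a rough Python timing sketch (the table size, key count, and loop counts are arbitrary, just to show the per-lookup constant adding up; it's not a careful benchmark):

    # A minimal sketch of how "free" O(1) lookups add up in a hot loop.
    import time

    table = {i: i * 2 for i in range(1_000_000)}
    keys = list(range(30_000))

    # One lookup: effectively invisible.
    start = time.perf_counter()
    _ = table[keys[0]]
    print(f"single lookup:     {time.perf_counter() - start:.2e} s")

    # 30,000 lookups per pass, repeated in a tight loop: still O(1) each,
    # but the constant factor now dominates the loop body.
    start = time.perf_counter()
    total = 0
    for _ in range(100):          # e.g. once per frame / per request
        for k in keys:
            total += table[k]
    print(f"3,000,000 lookups: {time.perf_counter() - start:.2e} s")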
Hash tables also effectively randomize your access patterns.
That can be bad for memory locality, especially if you outgrow your main memory (or just the various smaller caches before that). All without changing the O(1) w.h.p. asymptotics.
Usually you pay the memory locality price twice too, once for the index and once for the actual data, especially in OO languages. Thankfully fitting in main RAM has never been an issue for me, but I've seen the results when some of those fetches are in a database or external memory cache.
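A minimal sketch of that two-hop pattern, with plain dicts standing in for both the index and the record store (names and data made up; in the scenario above the second hop would be the database or external cache, which is where the cost really shows up):

    # Paying the locality price twice: one fetch for the index entry,
    # a second fetch for the record it points at.
    user_index = {"alice": 17, "bob": 42}                        # key -> record id
    user_records = {17: {"city": "Oslo"}, 42: {"city": "Lima"}}  # id -> actual data

    def lookup(name):
        record_id = user_index[name]      # hop 1: hash table buckets
        return user_records[record_id]    # hop 2: the actual (boxed) record

    print(lookup("alice"))                # {'city': 'Oslo'}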
One standard example I use is finding duplicates in a big collection of items, either via a hash table or via merge sort. The collection would be big enough not to fit into RAM, so you get swapping onto a hard drive.
Merge sort will mostly just work, because all its memory access patterns are sequential. The hash table would need to pay for seeks all the time.
So the O(n log n) merge sort would be faster than the O(n) hash table approach in this case.
(There are ways to improve the absolute running time further compared to these naive approaches.)
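For reference, here's a rough sketch of the two naive approaches, using small in-memory Python stand-ins; the external-memory behaviour is only hinted at in the comments:

    # Two naive ways to find duplicates in a large collection. The real
    # argument is about what happens when `items` no longer fits in RAM.

    def duplicates_via_hash(items):
        """Hash-table approach: O(n) lookups, but each lookup hits an
        effectively random bucket -- on a swapped-out table that's a seek
        per item."""
        seen, dups = set(), set()
        for x in items:
            if x in seen:
                dups.add(x)
            else:
                seen.add(x)
        return dups

    def duplicates_via_sort(items):
        """Sort-based approach: O(n log n), but merge sort reads and writes
        its runs sequentially, so it streams nicely off a disk. Duplicates
        end up adjacent, so one sequential pass finds them."""
        out = set()
        prev = object()  # sentinel that compares unequal to everything
        for x in sorted(items):
            if x == prev:
                out.add(x)
            prev = x
        return out

    items = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
    assert duplicates_via_hash(items) == duplicates_via_sort(items) == {1, 3, 5}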