Runtime errors seem a little bit fussy, but "try a fast approach that usually works, detect worst-case behavior, then fall back to a thing that's usually slower but avoids the worst case" is a common implementation pattern (think introsort). In principle, you could start with something trivial as your hash, then rehash your key with a better-behaved-but-slower function if your normal probing strategy is going on far longer than it ought to given the load factor.

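A minimal sketch of that fallback idea, in Python. None of the names here come from a real library: FallbackDict, PROBE_LIMIT, and the "strong" hash are all made up for illustration, and the salted-tuple hash is just a stand-in for a genuinely better-behaved-but-slower function like SipHash.

    PROBE_LIMIT = 8  # assumption: a real table would scale this with load factor

    class FallbackDict:
        """Open-addressing table: trivial hash first, stronger hash on demand."""

        def __init__(self, capacity=64):
            self._slots = [None] * capacity
            self._strong = False  # start with the cheap hash

        def _hash(self, key):
            if self._strong:
                # stand-in for a better-behaved-but-slower hash (e.g. SipHash);
                # salting through a tuple just perturbs Python's builtin hash
                return hash((0x9E3779B97F4A7C15, key))
            return hash(key)  # trivial: on CPython, identity for small ints

        def put(self, key, value):
            cap = len(self._slots)
            start = self._hash(key) % cap
            for probes in range(cap):
                i = (start + probes) % cap  # linear probing
                if self._slots[i] is None or self._slots[i][0] == key:
                    self._slots[i] = (key, value)
                    # worst-case detection: the probe sequence ran far longer
                    # than the load factor predicts, so rehash everything with
                    # the stronger function
                    if not self._strong and probes > PROBE_LIMIT:
                        self._rehash(strong=True)
                    return
            self._rehash(strong=self._strong)  # table full: grow and retry
            self.put(key, value)

        def get(self, key, default=None):
            cap = len(self._slots)
            start = self._hash(key) % cap
            for probes in range(cap):
                slot = self._slots[(start + probes) % cap]
                if slot is None:
                    return default
                if slot[0] == key:
                    return slot[1]
            return default

        def _rehash(self, strong):
            items = [s for s in self._slots if s is not None]
            self._strong = strong
            self._slots = [None] * (2 * len(self._slots))
            for k, v in items:
                self.put(k, v)

With the trivial hash, integer keys chosen to collide modulo the capacity (say 0, 64, 128, ...) pile into one long probe chain; once a chain exceeds PROBE_LIMIT, the table rehashes with the salted function, much as introsort bails out of quicksort into heapsort when recursion gets too deep.
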
But we rarely see that, and there are probably good reasons. Hash tables are less often the bottleneck in the real world than they are in benchmarks, and when they are an issue, other factors (size, concurrency, weird pathologies, memory latency) may matter more than hashing time.


