
Apologies. Throughout my CS undergrad I was only ever taught that Big-O measured worst case (an upper bound: no worse than), Big-Theta average case, and Big-Omega best case (a lower bound: no better than). Looking into it more now, I see that there are some subtleties I either missed in class or was never taught.

Thanks for correcting me!



The subtlety here is basically that big-O and big-omega and friends are ways of characterizing functions, and functions map one input to one output. "Running time of a problem of size n" is not a function; it has a range of possible values for a given n. "Maximum running time of a problem of size n" is a function. That function itself, an² + bn + c for some constants a, b, and c, has lower and upper asymptotic bounds.
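To make that concrete, here is a minimal sketch (the function name worst_case_comparisons and the witness constants are my own, not from the thread): the worst-case comparison count of first-element-pivot quicksort is a genuine function of n, and that single function has both lower and upper asymptotic bounds of the form c*n^2.

  # Hypothetical sketch: "maximum running time for inputs of size n" is a
  # well-defined function, and that function is squeezed between c1*n^2 and
  # c2*n^2 for large n, i.e. it has both an Omega(n^2) and an O(n^2) bound.

  def worst_case_comparisons(n):
      # First-element-pivot quicksort on already-sorted input partitions into
      # sizes 0 and n-1 at every step, doing (n-1) + (n-2) + ... + 1 comparisons.
      return n * (n - 1) // 2

  for n in [10, 100, 1000, 10000]:
      t = worst_case_comparisons(n)
      # Witness constants chosen for this sketch: c1 = 1/4, c2 = 1/2.
      print(n, t, 0.25 * n**2 <= t <= 0.5 * n**2)  # prints True for n >= 2
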

I thought you were right at first but then realized what was going on. This is a pretty subtle point and mostly uninteresting for well-understood algorithms like quicksort. But one slightly less subtle point is that big-theta isn't average case; it's the combination of big-O and big-omega, i.e., bounded from above and below (possibly with different constant factors) by the same asymptotic behavior.
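A rough sketch of that definition (the helper name is_theta_witness and the particular constants are assumptions for illustration, not anything from the thread):

  # Hypothetical sketch: f is Theta(g) exactly when it is both O(g) and
  # Omega(g), i.e. c1*g(n) <= f(n) <= c2*g(n) for all n past some n0,
  # where c1 and c2 may differ. Nothing here involves averaging over inputs.

  def is_theta_witness(f, g, c1, c2, n0, upto=10_000):
      return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, upto))

  f = lambda n: 3 * n * n + 5 * n   # e.g. some quadratic running-time function
  g = lambda n: n * n
  print(is_theta_witness(f, g, c1=3, c2=4, n0=5))  # True: f is Theta(n^2)
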



