A Beginner’s Guide to Big O Notation (rob-bell.net)
5 points by cdl on Aug 22, 2012 | 3 comments



Something I just have to point out after reading through the article and the comments there:

O(log_2 n) is the very same as O(log_a n) for any base a. That is because log_a n = c * log_b n, where c = 1 / log_b a is just a constant factor for any fixed a and b, and therefore c gets swallowed by Big O notation. I'll leave out a proof here, but it's quite simple to deduce anyway. The implications of this are interesting: it basically means that it doesn't really matter how many subproblems a divide-and-conquer algorithm creates to recurse on; it always scales equally well, up to a constant factor.
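To make that concrete, here is a minimal sketch (my own, not from the article) that checks the claim numerically: the ratio between logarithms to two different bases is a constant that does not depend on n.

    import math

    # log_2(n) / log_8(n) = 1 / log_8(2) = 3, independent of n,
    # so O(log_2 n) and O(log_8 n) describe the same growth class.
    for n in [10, 1000, 1000000, 10**12]:
        print(n, math.log(n, 2) / math.log(n, 8))  # prints 3.0 every time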

However, the most important thing here for a programmer is that Big O notation does not say anything about actual performance, since it swallows all constant factors. And those factors matter. That is exactly why we try to optimize the innermost loop in, say, an O(n^3) algorithm.
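As a rough illustration (my own sketch, nothing from the article): both functions below multiply n x n matrices in O(n^3) time, but the second reorders the loops and hoists work out of the innermost one, so it usually runs quite a bit faster. Big O treats them as identical.

    def matmul_naive(a, b):
        # O(n^3) multiply of two n x n matrices given as lists of lists.
        n = len(a)
        c = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    c[i][j] += a[i][k] * b[k][j]
        return c

    def matmul_tuned(a, b):
        # Still O(n^3), but with a smaller constant factor: rows are fetched
        # once and the innermost loop touches memory sequentially.
        n = len(a)
        c = [[0] * n for _ in range(n)]
        for i in range(n):
            ai, ci = a[i], c[i]
            for k in range(n):
                aik, bk = ai[k], b[k]
                for j in range(n):
                    ci[j] += aik * bk[j]
        return c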


Besides brute-force decryption, does anyone have some good examples of O(2^N) algorithms?

Also: http://www.kurzweilai.net/software-progress-beats-moores-law Algorithmic improvements account for more of the acceleration in computing power than physical factors, such as transistor density.


SAT, for instance, has so far only been solved by algorithms that take O(2^n) time in the worst case. The tricky thing here is that we don't quite know whether it could be done any quicker (google P=NP for a more detailed description).
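For a feel of what such an O(2^n) algorithm looks like, here is a minimal brute-force SAT sketch (my own illustration, using a made-up clause encoding): it simply tries all 2^n truth assignments.

    from itertools import product

    def brute_force_sat(num_vars, clauses):
        # clauses is a formula in CNF; each clause is a list of non-zero
        # ints, where k means "variable k is true" and -k means "variable k
        # is false" (variables are 1-indexed).
        # Worst case: all 2^num_vars assignments are tried.
        for bits in product([False, True], repeat=num_vars):
            if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
                   for clause in clauses):
                return bits  # satisfying assignment found
        return None

    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
    print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))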



