Something I just have to point out after reading through the article and the comments there:
O(log_2 n) is the very same as O(log_a n) for any base a > 1. That is because log_a n = c * log_b n, where c = 1/log_b(a) is just a constant factor for any fixed bases a and b, and constant factors get swallowed by Big O notation. I'll leave out a proof here, but it follows directly from the change-of-base formula. The implications of this are interesting: it means it doesn't really matter how many subproblems a divide and conquer algorithm splits into at each step; whether it recurses on 2 parts or 10, the recursion depth is O(log n) either way, just with a different constant factor.
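A quick numerical sanity check in Python (an illustration, not a proof): the ratio between logs of different bases is constant, here log_2 8 = 3.

    import math

    # log_2 n and log_8 n differ only by the constant factor log_2 8 = 3,
    # regardless of how large n gets.
    for n in [10, 1_000, 1_000_000, 10**12]:
        print(n, math.log2(n) / math.log(n, 8))  # always 3.0 (up to float error)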
However, the most important thing here for a programmer is that Big O notation says nothing about actual performance, since it swallows all constant factors. And those factors are important in practice. That is exactly why we try to optimize the innermost loop in, say, an O(n^3) algorithm: the asymptotic class stays the same, but the constant in front of n^3 shrinks.
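To make that concrete, here is a minimal sketch (Python, function names are mine) of what "optimizing the innermost loop" can look like for naive O(n^3) matrix multiplication. Both versions perform Theta(n^3) multiplications; the second merely hoists work that doesn't depend on j out of the innermost loop.

    def matmul_naive(a, b, n):
        c = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    c[i][j] += a[i][k] * b[k][j]
        return c

    def matmul_hoisted(a, b, n):
        c = [[0] * n for _ in range(n)]
        for i in range(n):
            row_a, row_c = a[i], c[i]
            for k in range(n):
                aik = row_a[k]   # loop-invariant in j: look it up once
                row_b = b[k]
                for j in range(n):
                    row_c[j] += aik * row_b[j]
        return c

Same Big O, measurably different wall-clock time.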
SAT, for instance, has so far only been solved by worst-case exponential algorithms, on the order of O(2^n). The tricky thing here is that we don't know whether it could be done fundamentally faster (look up the P vs. NP problem for a more detailed discussion).
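For a feel of where the 2^n comes from, here is a minimal brute-force SAT sketch (Python; the clause encoding is a DIMACS-style convention assumed here for illustration): with n variables there are 2^n truth assignments to try.

    from itertools import product

    # A clause is a list of nonzero ints: i means "variable i is true",
    # -i means "variable i is false". A formula is a list of clauses.
    def brute_force_sat(clauses, n):
        for assignment in product([False, True], repeat=n):  # 2^n candidates
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return assignment  # found a satisfying assignment
        return None  # unsatisfiable

    # (x1 or not x2) and (not x1 or x2)
    print(brute_force_sat([[1, -2], [-1, 2]], 2))  # -> (False, False)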