A minor quibble - it is very difficult to find a paradigm in which the running time of the proposed algorithm differs from simply choosing the best of the obvious algorithms.
Line at a time takes O(max(w, h)). Binary searching each row and column separately takes O(min(w, h) log(max(w, h))). Therefore, implementing both basic algorithms and using the faster one at run-time will run in time O(min(max(w, h), min(w, h) log(max(w, h)))).
That is very close to the proposed run-time - unfortunately so much so that it is very hard to find a sequence of (w, h) for which the proposed algorithm does asymptotically better than simply doing "the obvious thing".
But it is possible! And the lower bounds are nice!
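Concretely, a sketch of that run-time dispatch (saddleback_search and binary_search_rows are hypothetical stand-ins for the two basic algorithms, which I haven't shown):

import math

def search_switching(matrix, target):
    # Pick whichever basic algorithm has the smaller bound for this shape.
    h, w = len(matrix), len(matrix[0])
    if max(w, h) <= min(w, h) * math.log2(max(w, h)):
        return saddleback_search(matrix, target)   # O(max(w, h))
    return binary_search_rows(matrix, target)      # O(min(w, h) log(max(w, h)))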
I (the author) actually did lose a lot of time trying to prove those weren't the same bound! Eventually I found a variable assignment that made it obvious:
let w = n/log(n)
let h = n
(Note: I use ↑ and ↓ as binary operators for max and min.)
(Note: @= means asymptotically equal. Analogous for @<.)
'switching'
@= (w ↑ h) ↓ ((w ↓ h) log(w ↑ h))
@= (n/log(n) ↑ n) ↓ ((n/log(n) ↓ n) log(n/log(n) ↑ n))
@= n ↓ (n/log(n) log(n))
@= n
'lower'
@= (w ↓ h) log((w ↑ h)/(w ↓ h))
@= (n/log(n)) log(n/(n/log(n)))
@= n log(log(n)) / log(n)
@< n
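A quick numeric sanity check of that assignment (a throwaway script, not part of the article):

import math

for k in (10, 20, 40, 80):
    n = 2.0 ** k
    w, h = n / math.log2(n), n
    switching = min(max(w, h), min(w, h) * math.log2(max(w, h)))
    lower = min(w, h) * math.log2(max(w, h) / min(w, h))
    # switching/n stays at 1 while lower/n shrinks, so the bounds differ.
    print(k, switching / n, lower / n)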
if height < width:
    result = search_sorted_matrix(zip(*matrix), target)
    return result and result[::-1]
Does zip(*matrix) complete in constant time, or in time proportional to w*h? I was only able to get away with transposing because I used a LazyTranspose method that ran in constant time and added only constant overhead to each access.
I noticed that you noticed col_max_row was redundant with max_row. I was wondering if anyone would. (It wasn't redundant back when I wasn't taking advantage of eliminating rows during the binary search. That made the animations look really dumb, so I fixed it but didn't notice the redundancy.)
Oh, lastly, some of your tests are brittle. search_sorted_matrix(example, 1) is not required to return (0, 0); tweaking the implementation could conceivably change that to (0, 1) or (0, 2). What's important is that M[result] == query.
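i.e. a test shaped more like this (a sketch, reusing example and search_sorted_matrix from your tests):

result = search_sorted_matrix(example, 1)
assert result is not None
row, col = result
assert example[row][col] == 1  # any cell holding the query is a correct answer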
Yeah, zip is O(w*h). I meant to replace it once everything was working but forgot. It's simple to write a "TransposedMatrix" wrapper that only adds constant overhead:
class TransposedMatrix(object):
    """Constant-overhead transposed view: t[col][row] == matrix[row][col]."""
    def __init__(self, matrix, index=None):
        self.matrix = matrix
        self.index = index  # None for the whole view, else a fixed column
    def __getitem__(self, index):
        if self.index is None:
            # First index picks a column of the underlying matrix.
            return TransposedMatrix(self.matrix, index)
        # Second index picks a row; swap the order of the two lookups.
        return self.matrix[index][self.index]
    def __len__(self):
        if self.index is None:
            return len(self.matrix[0])  # width of the original
        return len(self.matrix)         # height of the original
and I've updated the gist to do that. Thanks for pointing it out.
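For example, a quick sanity check (hypothetical 2x3 matrix, not from the gist):

m = [[1, 2, 3],
     [4, 5, 6]]
t = TransposedMatrix(m)
assert len(t) == 3 and len(t[0]) == 2  # dimensions swap
assert t[2][1] == m[1][2]              # t[col][row] == m[row][col]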
The tests are definitely quick & dirty; I'm aware there can be more than one correct answer.
Great article. I think considering lower bounds and building intuition about a problem before attempting to solve it is a major step in algorithm design that lots of people miss. If you don't really know your problem well, how can you ever be certain that your solution is good, let alone optimal?
Switching between two algorithms like this is a common technique, but IIRC it was never covered in my undergrad algorithms course.
I wrote about another problem that uses this technique: mixing ternary search and linear search to find the minimum of an array that first decreases, then increases.
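If I'm remembering it right, the mix looks something like this (a sketch, assuming the array is strictly decreasing then strictly increasing; ternary search shrinks the interval and a linear scan finishes it off):

def find_valley_min(a):
    lo, hi = 0, len(a) - 1
    # Ternary search: discard the third that cannot contain the minimum.
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if a[m1] < a[m2]:
            hi = m2 - 1  # the minimum lies strictly left of m2
        else:
            lo = m1 + 1  # the minimum lies strictly right of m1
    # Linear scan over the few remaining candidates.
    return min(a[lo:hi + 1])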