I've written this kind of code myself, where you measure a time delta and divide something by the delta. One thing always sticks out, though: the delta might be zero, so you might divide by zero (especially noticeable in Java, where integer division by zero throws!).
The article says it would have been picked up in code review, and I agree. But it seems odd that it wasn't changed right there. Why not write the loop so that it keeps looping as long as the elapsed time is below some threshold, say 10ms? You also want to minimise the estimation error, which is easier if you divide by a slightly larger number. Consider a loop that takes between 1 and 2ms to finish: with millisecond-resolution timing, your estimate will be either x or 2x, off by up to a factor of two.
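A minimal sketch of what I mean, in Python for brevity (the function name and the 10ms threshold are illustrative, not from the article): keep repeating the work until enough time has passed that the divisor can't be zero and the relative timing error is small.

```python
import time

MIN_ELAPSED = 0.010  # assumed 10 ms floor; tune to your clock resolution


def estimate_ops_per_second(work):
    """Run `work` repeatedly until at least MIN_ELAPSED seconds have
    elapsed, then divide. The divisor is guaranteed nonzero, and the
    relative error shrinks as the elapsed time grows."""
    iterations = 0
    start = time.perf_counter()
    while True:
        work()
        iterations += 1
        elapsed = time.perf_counter() - start
        if elapsed >= MIN_ELAPSED:
            break
    return iterations / elapsed


rate = estimate_ops_per_second(lambda: sum(range(100)))
```

With a 10ms floor and a clock good to ~1ms, the worst-case relative error is around 10% instead of the 2x you get when the whole measurement fits in one or two ticks.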