
It seems like this should just reduce to the question: given two natural numbers x and y represented by n bits, how many bits are needed to represent the difference sqrt(y) - sqrt(x) so that it is non-zero whenever x != y? To see this, suppose you compute each sqrt in both lists to k bits, then group together the closest matching pairs from the two lists. Some of these pairs may have a difference of zero. Now increase k until all of the pairs are distinguished (have non-zero differences). At that point you know the answer; you just have to add up the differences.

As for the first question, it should take no more than n bits, because sqrt(x) is a contraction (at least for x >= 1), as is obvious from comparing its graph with that of f(x) = x.
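
Rough sketch of the procedure in Python with the stdlib decimal module (precision here is counted in decimal digits rather than bits, and I'm assuming "closest matching pairs" just means sort both lists and match element-wise):

    from decimal import Decimal, getcontext

    def sum_of_deltas(xs, ys, max_digits=100):
        xs, ys = sorted(xs), sorted(ys)
        for k in range(2, max_digits):
            getcontext().prec = k
            deltas = [Decimal(y).sqrt() - Decimal(x).sqrt()
                      for x, y in zip(xs, ys)]
            # raise k until every pair with x != y has a nonzero delta
            if all(d != 0 for x, y, d in zip(xs, ys, deltas) if x != y):
                return sum(deltas)
        raise ValueError("ran out of precision")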




> At that point you know the answer, you just have to add up the differences.

What if your sum in this step is zero?


There's a known upper bound on the error in each pair's delta caused by the finite number of bits, and that gives upper and lower bounds on the error in the sum of the deltas. Add enough bits to your representation to account for that error; I think that's an additional log N + 1 bits, where N is the sequence length. Then if the sum comes out to zero, that must be the correct answer (probably impossible unless the two sequences contain the same values).
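
Something like this, say (a sketch in fixed point with k fractional bits; math.isqrt gives the floor, so each rounded root is within 2^-k below the true value, and each per-pair delta is therefore off by less than 2^-k in either direction):

    import math

    def sum_and_error_bound(xs, ys, k):
        scale = 1 << k                       # k fractional bits
        deltas = [math.isqrt(y * scale * scale) - math.isqrt(x * scale * scale)
                  for x, y in zip(sorted(xs), sorted(ys))]
        approx = sum(deltas) / scale
        bound = len(deltas) * 2.0 ** -k      # |true difference - approx| < bound
        return approx, bound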


> Add enough bits to your representation to account for that error.

I've reviewed a few papers where authors respond with this sort of argument. It never ends in a publication. But, I really should have picked on the preceding sentence:

> Now increase k until all of the pairs are distinguished (have non-zero differences).

Do you have an estimate of the bounds, time or space, required for that? Because establishing those bounds is the "hard problem" here.

Also, what do you do about questions like sqrt(2)+sqrt(50) =? sqrt(72) where the sums have differing numbers of terms?
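
(And that one is an exact identity: sqrt(50) = 5*sqrt(2), so sqrt(2) + sqrt(50) = 6*sqrt(2) = sqrt(36*2) = sqrt(72).)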

And it might be worthwhile to play with an example: (55, 77, 83) and (64, 68, 82). Depending on where you decide to truncate, the difference wobbles between positive, negative, and zero, well past the point you've described where the pairwise differences are nonzero (at 4 bits the pairs are all distinguished, yet the sums still appear to be equal). It finally stabilizes after 25 bits -- much greater than log N + 1.
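
You can watch it happen with Python's decimal module (precision counted in decimal digits here rather than bits, but the behaviour is the same; this just prints the computed difference at each working precision):

    from decimal import Decimal, getcontext

    for digits in range(2, 16):
        getcontext().prec = digits
        a = sum(Decimal(v).sqrt() for v in (55, 77, 83))
        b = sum(Decimal(v).sqrt() for v in (64, 68, 82))
        print(digits, a - b)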


OK, I think I understand the problem now. It's that the actual answer can be very small but non-zero. You can put error bars on your calculations, but that doesn't necessarily tell you which side of zero the answer is on. In fact, you would need to do the calculation with enough bits to represent the true answer, which you don't know a priori. What you can do, though, is put an upper bound on the size of the difference between the two sums, and you can make that bound as small as you want by using more bits.
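
So the most you can do mechanically is bracket the difference and hope the bracket excludes zero, something like this (again fixed point with k fractional bits, floor/ceiling brackets around each root):

    import math

    def compare_sums(xs, ys, k):
        scale = 1 << k
        lo = hi = 0
        for v in xs:
            r = math.isqrt(v * scale * scale)    # floor(scale * sqrt(v))
            lo += r
            hi += r + 1
        for v in ys:
            r = math.isqrt(v * scale * scale)
            lo -= r + 1
            hi -= r
        if lo > 0:
            return "first sum is larger"
        if hi < 0:
            return "second sum is larger"
        return "unknown at this precision -- increase k"

Nothing here tells you in advance how large k has to get before the bracket excludes zero, which is exactly the hard part.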



