Non-computability of solutions of certain equations on digital computers (2022) (arxiv.org)
88 points by lisper 4 months ago | 102 comments



Happy to see this made it to the front page. I submitted it because I'm hoping someone here will be able to understand it well enough to explain its significance. Is this Big News or a parlor trick? If it really is an example of an uncomputable physical process, that would seem to me to be earth-shattering, and yet this result does not appear to have made a splash. It is based on a construction of J. Myhill of a continuous recursive (and hence computable) function whose derivative is not computable. The proof is non-constructive, and it assumes the existence of a partially decidable set called A which somehow (this is the part I don't quite understand) results in a computable function despite the fact that its derivative can't be computable because that would imply A is decidable. Any clues would be much appreciated.

[UPDATE] Let me put this more succinctly: how is it possible that there can be a continuous computable function whose derivative is uncomputable? What exactly is it about this function that makes numerical differentiation fail?

[UPDATE2] Here is a link to the Myhill paper for easy reference.

https://projecteuclid.org/download/pdf_1/euclid.mmj/10290006...

It's remarkably short, only two pages.


> how is it possible that there can be a continuous computable function whose derivative is uncomputable?

For others: The original paper is short and very readable - https://projecteuclid.org/journalArticle/Download?urlId=10.1...

The intuition seems to be that computability of a function is actually approximability, and derivatives can be large even when the function is very small, by having the large slope confined to a very small area. So by encoding an uncomputable (but recursively enumerable!) set in the derivative via strategically placed little bumps that grow exponentially smaller, Myhill was able to construct a function that was easily approximable but whose derivative wasn't.

The key detail here is that the bumps grow exponentially smaller in the order the uncomputable set appears in its enumeration. This allows Myhill to construct a sequence of computable functions that (uniformly) approximate f to within 2^(-n). If the bumps weren't ordered in this way, we wouldn't know how far down the sequence to go to approximate f to within 2^(-n). The ordering lets us just look at the first n elements of the uncomputable set.

Why doesn't this allow us to approximate the derivative? Because at any x, we don't know how well we have to approximate f in order to approximate f'(x), so we need knowledge of the entire uncomputable set, which we cannot have.
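To make that concrete, here's a rough Python sketch of the approximation half of the argument. It is not Myhill's actual construction: enumerate_A is a toy stand-in for the recursive enumeration of the r.e. set, and the bump placement is arbitrary; the point is only that the k-th bump has height 2^(-(k+1)), so summing just the first n bumps is already within 2^(-n) of the full sum.

  import math

  def enumerate_A():
      # Hypothetical stand-in for a recursive enumeration of the r.e. set A.
      # Myhill dovetails over halting computations here; we just yield some
      # arbitrary distinct naturals in "enumeration order".
      for a in [3, 7, 1, 12, 5, 9, 2, 20, 15, 4]:
          yield a

  def bump(x, center, height, width):
      # A smooth bump: its peak value is `height`, but its steepest slope is
      # on the order of height/width, so shrinking height and width together
      # keeps the function tiny while the derivative stays order 1.
      return height * math.exp(-((x - center) / width) ** 2)

  def f_approx(x, n):
      # Approximate f(x) to within 2^(-n): the k-th bump has height 2^(-(k+1)),
      # so the tail of bumps we ignore sums to at most 2^(-n).
      total = 0.0
      for k, a in enumerate(enumerate_A()):
          if k >= n:
              break
          total += bump(x, center=a, height=2.0 ** -(k + 1), width=2.0 ** -(k + 1))
      return total

The derivative near a bump's center stays around 1 no matter how late that element shows up in the enumeration, which is exactly the information the approximation above never has to touch.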


> The key detail here is that the bumps grow exponentially smaller

Pedantically, it's that the bumps grow exponentially narrower.

IIUC, a simpler (but more nitpickable) version would be f'(n+1/y) = [the fraction of n-state Turing machines that halt within y steps] (for positive integers n, and 0 elsewhere); then f is the integral of f'. The value of f(x) can be approximated arbitrarily well by running every floor(x)-state Turing machine for a large finite number of steps[2], but f'(x) is equal to Chaitin's constant[0] for n-state Turing machines over a strictly-positive-length[1] subinterval of [n,n+1).

A much simpler version is f'(x) = {-1 if x<0; +1 if x>0; Chaitin's constant if x=0}. f(x) = abs(x).

Basically, this isn't even a parlor trick. It's a nothing burger, disguised as something by means of a parlor trick.

0: https://en.wikipedia.org/wiki/Chaitin%27s_constant

1: And so strictly-containing-rational-numbers, although predicting which rational numbers is nontrivial.

2: The rounding error from assuming that all machines which haven't halted within y steps actually run forever scales as O(1/y) in the worst case.


> Pedantically, it's that the bumps grow exponentially narrower.

Has to be smaller in both directions, right? It has to be vertically smaller to allow f to be computable, but horizontally smaller to keep f' large.

> IIUC, a simpler (but more nitpickable) version would be...

Here I think your f as defined is identically zero, and so you don't get f' by differentiating.

> A much simpler version is f'(x) = {-1 if x<0; +1 if x>0; Chaitin's constant if x=0}. f(x) = abs(x).

Similarly, differentiating abs(x) doesn't get you that function.

But I see where you're coming from, and I agree this doesn't tell us anything profound about computability. That said, it is a cute result that differentiable functions are flexible enough to admit this kind of construction, and that's straightforward but not obviously trivial, as your attempts somewhat ironically show. I think Myhill's proof is pretty close to a minimal working example and trying to fix your examples would result in something that looked very similar.


> A much simpler version is f'(x) = {-1 if x<0; +1 if x>0; Chaitin's constant if x=0}. f(x) = abs(x).

This is wrong. The function f you've defined is not continuously differentiable. The one in Myhill's paper is.


Evidently I should have been more explicit:

> > A much simpler[-and-more-nitpickable] version

Myhill uses a parlor trick to make his version continuously differentiable. It's a pretty good parlor trick, but it doesn't have much to do with computability.


Is it specifically the fact that it's a physical process that makes it earth shattering?

Physical models of computers ( https://en.wikipedia.org/wiki/Billiard-ball_computer ) have been around for a long time, which means that noncomputability has been tangible for a long time.

I might be numb, though, to the fact that much of what we want to understand can't be. If the universe isn't discrete, then it seems like noncomputability is the default.


> it seems like noncomputability is the default

compare https://en.wikipedia.org/wiki/Weierstrass_function#Density_o... (1872 et seq)?


Hey, I feel you, my first thought was that continuously differentiable functions aren't physical :)


The first order theories of the real numbers using addition and multiplication are decidable [0], while FO theories of the integers under the same operations are not.

[0] https://en.wikipedia.org/wiki/Decidability_of_first-order_th...



The relevant book Computability in Analysis and Physics by Pour-El and Richards is from 1989 but it is quite readable. The main requirements are a strong mathematical background (mostly in analysis and PDEs), an acquaintance with computability theory, and perhaps some appreciation for physics.


I wonder to what degree this is an artifact of the definition of "computable real" in common use (originally due to Turing, I believe), which is a Turing machine that outputs successive digits of your real number.

The point, I think, is that this gives you a good model for doing mathematics - it's a simple definition, and it allows you to talk about real numbers to arbitrary precision in arbitrary computational problems. But the downside is that it's a minefield of subtle logical issues - addition is non-computable! So is the order relation, and equality, and a host of other basic stuff. It's not surprising that a complex limit-based process like taking a derivative would also be non-computable.

On the other hand physicists want to model reality, which is a totally different set of constraints. I suspect that, as mathematicians, we've taken a logical construction that maps reasonably well onto the universe in some cases (real numbers) and stretched it far beyond its applicability there, because it leads to interesting mathematics. Physicists, if they ever find the cracks, will just come up with a better abstraction and move on.

Anyway, not sure if I'm making sense here.


> It's not surprising that a complex limit-based process like taking a derivative would be also be non-computable.

The reason it was surprising to me is that it would naively imply that an analog computer could compute something that a TM could not. If that were true, it would be Big News. But the construction of a computable function whose derivative is uncomputable is such that this function cannot be physically realized because you run into quantum effects.

Also, you might find this interesting:

https://www.sciencedirect.com/science/article/pii/S030439752...


Nice, thanks for the link. I knew that "printable" and "computable" reals were equivalent logically but it didn't occur to me that the equivalence itself would be uncomputable! This is exactly what I mean about a minefield of subtlety haha.

But to my point, I don't even think the examples in the OP paper are that convincing to me. Sure, you can construct examples of circuits with computable inputs and non-computable outputs, but this basically comes down to fractal-like driving behaviour that can't be approximated at any scale correctly. It's an artifact of the mathematics, which lets you do this kind of thing to an input signal, but at that point (my belief is that) you've left the realm of physics/reality and are now chasing pure logical implications.

It's like extrapolating from the edge of a map and concluding that there's no ocean because your original map had no water in it. Actually, you just needed a different map.


Yes, I think you have it exactly right.


A weaker but easier result is that the differentiation operator is uncomputable: Define f_t(x) = t arctan(x/t). As t -> 0, f_t -> 0. But notice that f'_t(0) = 1.
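(Concretely, f'_t(x) = 1/(1 + (x/t)^2), so f'_t(0) = 1 for every t > 0 even though sup|f_t| = t*pi/2 -> 0: the functions converge uniformly to zero while their derivatives at 0 stay pinned at 1, which is the discontinuity the argument exploits.)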

By contrast, the integration operator is computable. Why? Because a piece of code that defines a function f over the real numbers can be generalised to interval numbers (see "interval arithmetic") -- it then follows that the upper and lower Darboux sums are both computable. QED
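A rough Python sketch of that idea, with a tiny hand-rolled interval type standing in for a real interval-arithmetic library (and ignoring floating-point rounding, which a careful implementation would handle by outward rounding):

  from dataclasses import dataclass

  @dataclass
  class Interval:
      lo: float
      hi: float
      def __add__(self, other):
          return Interval(self.lo + other.lo, self.hi + other.hi)
      def __mul__(self, other):
          ps = [self.lo * other.lo, self.lo * other.hi,
                self.hi * other.lo, self.hi * other.hi]
          return Interval(min(ps), max(ps))

  def f(x):
      # Any expression built from interval + and * encloses the true range
      # of the corresponding real function on each subinterval.
      return x * x + x

  def integral_bounds(a, b, n):
      # Evaluate f on n subintervals of [a, b]; the enclosures give lower and
      # upper bounds on the integral (Darboux-style), tightening as n grows.
      lo = hi = 0.0
      w = (b - a) / n
      for i in range(n):
          y = f(Interval(a + i * w, a + (i + 1) * w))
          lo += y.lo * w
          hi += y.hi * w
      return lo, hi

  print(integral_bounds(0.0, 1.0, 1000))   # brackets the true value 5/6

The two numbers returned squeeze the integral from below and above as n grows, which is the sense in which integration is computable while the arctan example shows differentiation is not.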

[edit] Changed oo to 0.


Thanks!

I think you don't even need arctan for this. Just put one of Myhill's "bumps" at x=0 and shrink it down. That function approaches f(x)=0 for all x but its derivative at x=0 remains 1.


The arctan has the advantage that it's differentiable infinitely many times over R.


I would consider the Myhill proof to be constructive, curious why you don't think so?

A set A as desired, non-recursive but recursively enumerable, is conceptually easy to construct: let A be the binary encodings of all Turing machines that halt. This is well known not to be recursive (computable), but you can recursively enumerate it by taking a bijection s: N -> NxN (possible since NxN is known to be countable) and on step i emitting M if the Turing machine with binary encoding M halts on step B, where s(i)=(M,B).
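For what it's worth, the dovetailing itself is a few lines of Python; halts_within(M, B) below is a hypothetical predicate standing in for actually simulating machine M for B steps:

  def pairs():
      # Enumerate N x N along diagonals: (0,0), (0,1), (1,0), (0,2), (1,1), ...
      d = 0
      while True:
          for m in range(d + 1):
              yield m, d - m
          d += 1

  def enumerate_halting(halts_within):
      # Emit encoding M the first time some step budget B with
      # halts_within(M, B) shows up. Every halting machine eventually appears,
      # but nothing is ever learned about the non-halting ones, so the set is
      # recursively enumerable without being recursive.
      seen = set()
      for M, B in pairs():
          if M not in seen and halts_within(M, B):
              seen.add(M)
              yield M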

The resulting Myhill function f is indeed computable with a non-computable derivative, but it is highly dubious that such a function would be physically realizable. Why and how would we expect such a function to possibly arise in nature, based on a non-computable set A? For one thing, its second derivative diverges so hard it can't be bounded as x -> 0 by any computable function. Also, one very quickly is dealing with x and y scales far below Planck limits.


> curious why you don't think so?

Uh, because Myhill says so? "We first define the function f non-constructively..."

But I agree with your analysis. It seems constructive to me too.

I don't understand his proof that f is recursive though.


Ah, I think the non-constructive comment is just about presentation order; he gives a sketch of how f works first then later he gives an explicit formula partway through page 2 after theta, alpha, beta, and h are defined.

For why f is recursive (computable), here's maybe a helpful way of thinking about it: Let's think about the halting behavior of an input Turing Machine M. Consider the following pseudocode infinitary program which "outputs" two variables A_M and B_M after infinite time:

  for each integer n:
    if M halts on step n:
      return A_M = 2^(-n) ; B_M = 1
  if M never halts, return A_M = 0 ; B_M = 0
Clearly, B_M is not computable; that's the halting problem. However, A_M is actually computable, as computability of real-valued functions is defined as the ability to give an estimate to arbitrarily small queried precision: for precision epsilon, pick 2^(-c) < epsilon and run our algorithm above for just the first c steps; then pass on the value if it returned, or we can return zero and that's fine as we are within epsilon of the true A_M. So, A_M is computable whereas B_M is not.
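As a sketch (with halts_on_step(M, n) a hypothetical predicate for "M halts exactly on step n"), the epsilon-approximation of A_M is just:

  import math

  def approx_A(M, eps, halts_on_step):
      # Pick c with 2^(-c) < eps and simulate only c steps. If M halted by
      # then, we return A_M = 2^(-n) exactly; if not, the true A_M is either
      # 0 or smaller than 2^(-c), so returning 0 is still within eps.
      c = math.ceil(-math.log2(eps)) + 1
      for n in range(1, c + 1):
          if halts_on_step(M, n):
              return 2.0 ** -n
      return 0.0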

The trick then is to observe that we can construct theta-functions with arbitrarily small values but constant-sized derivatives; so by summing a bunch of these together we get a function whose value at 2^(-M) behaves like A_M and whose derivative at 2^(-M) behaves like B_M.


Ah, that makes sense. Thanks!


> If it really is an example of an uncomputable physical process

I guess first you'd have to prove that there are physical processes that are actually continuous, and that this continuum is genuinely the continuum of the reals.

It is already known that a computer with access to real numbers with infinite precision can perform hypercomputation. I'm not sure how relevant this is, but:

https://en.wikipedia.org/wiki/Real_computation


The continuity of physical processes is not something that can be proved. It is an assumption that is included or not included among those on which a mathematical model of the physical processes is based. The suitability of any such model is based on the comparison between its predictions and the experimental results.

In standard physics, space and time are continuous, and nothing else is. All the physical quantities that are derived from lengths, angles, or time are continuous (this includes quantities like mass and energy, which are derived from time). Besides those, there are no other physical quantities that are continuous.

The mathematical models that assume the continuity of space and time match the experiments extremely well, and for now there exists no evidence whatsoever that space and time are not continuous.

Besides standard physics, there have been a few attempts to create mathematical models of physical processes where space and time are discrete. Nevertheless, these mathematical models have not succeeded in demonstrating any advantage over the standard models with continuous space and time.


> The continuity of physical processes is not something that can be proved.

I agree with you, but is there any peer-reviewed publication that can be cited? The idea makes sense to me, firstly the Reals \ Inaccessible Reals = Computable Reals, secondly you can't ever input an inaccessible real to an experiment nor retrieve one out of an experiment -- but then I'm not completely certain in making the conclusion that no experiment can be devised which shows that inaccessible reals exist in physical space.

I am concerned about this in the field of complexity analysis of quantum computers too, I think that the use of reals in physics is leading to mathematically correct but non-physical results about complexity theory of quantum computers. Having a paper to point at and say "look, stop assuming your Bloch spheres are backed by uncountable sets, it's leaking non-computable assumptions into your analysis of computation" would be helpful.


> It is already known that a computer with access to real numbers with infinite precision can perform hypercomputation

I took an interest in this area once, and this is not true. "Computable analysis / computable topology" says quite the opposite.



No. There's literally an article on Computable Analysis, and it details Type Two Turing Machines.


> [UPDATE] Let me put this more succinctly: how is it possible that there can be a continuous computable function whose derivative is uncomputable? What exactly is it about this function that makes numerical differentiation fail?

Like this

f(0) = 1

f(x!=0) = 0

It's more of a mathematical trick than anything else; it's not a "natural function", let's put it that way.


Equality over computable reals is not decidable, so I don't think you've actually defined a computable function here. The issue is much more subtle than that.


> Big News

The paper is from May 2022. Don't you have the answer to whether it's big news? Whether the news is big is (usually) apparent/assessed when it is new.


Two years is not that old, and the topic is obscure enough that it could plausibly have gone unnoticed. Bell's inequality went virtually unnoticed when it was first published.

My money is definitely on parlor trick, but I would still like to understand why.


At a glance, this paper seems extremely questionable. It appears to take an offhand comment by Kahneman in a Lex Fridman podcast regarding the limits of AI and interpret it as asking whether there is a problem that humans can solve but machines cannot -- which is definitely not a reasonable interpretation IMO as in context Kahneman is talking about the limitations of present AI systems, and computability is a totally different question. The paper's claim to have found such a class of problem would mean the Church-Turing thesis is false, and so it is quite surprising that the Church-Turing thesis and existing discussion around it does not appear to be mentioned or cited. Anyway, the paper constructs at some length a non-computable problem and then declares, "Nevertheless, a human can certainly solve this problem by creative work." While I haven't gone through the construction in-depth to see whether that's reasonable, this certainly doesn't come across as a very rigorous proof. Also, in the introduction, it makes the bold claim that "digital computers ... are able to compute with finite sequences of rational numbers only," (and suggests that analog and quantum computers lack such limitations) which is certainly false as can be seen by programs such as Mathematica that easily solve many systems involving irrational and transcendental numbers.


This reminds me of this reddit question https://www.reddit.com/r/askmath/comments/1eakt5c/if_you_pic... "If you pick a real number from 0 to 1, what is the probability that it is rational?"

and the first response, " With a uniform random process over an interval (a, b), you can find the probability of picking an element from a set X by integrating 1/(b - a) over the set X.

Since the (lebesgue) integral of the rational numbers is 0, the probability of picking a rational number is 0. " And that kinda broke my brain for an afternoon.

So, when a computer represents a number, it's always rational. You can have a lot of memory, but always finite.

Now, you can go further, and represent things symbolically, like sage math or a bunch of other systems. But (I believe) at the end of the day Gödel always wins. There are always true statements that can't be proved. If I remember right, computable stuff is a subset (maybe improper?) of provable stuff.

Looks interesting. Thank you for posting it. I'll try to read it later. Although, I'm a math tourist. I probably won't be able to give you any grand insight.


Sets of measure 0 that are not empty or are even infinite are indeed mind-bending when first encountered. But a set of cardinality 1 may be fun, too.

Take a real-valued stochastic process S, and an arbitrary time t. The process generates a value, v = S(t).

Since the width of the range from v to v is zero, the probability that S generates this number is also zero.

We can replace it with a physical stochastic process. Say, take the voltage in your electric socket. It fluctuates randomly around the nominal value. At every moment there is some voltage in the socket, and the probability to encounter this voltage is zero. It looks like the impossible is actually happening at every moment!

This is, of course, the probability of encountering a given value. If we pick a value first, and then start sampling our stochastic process, there's zero probability that we're going to encounter our (infinitely precise) number, no matter how long we'd try.

(A physical voltmeter has a finite resolution, say, 0.10 V, so it's going to show you a predetermined voltage pretty often. But it would be a rounded approximation of the unpredictable real value.)


You can provide definitions for irrational numbers, but they'll be expressed in terms of infinite serieses (what the hell is the plural form of series?).

So, if ever you try to expand them, you'll need to truncate at some point.

There will however, be situations where you can perform operations directly on the serieses, and in those cases (but not the ones requiring expansion and truncation) you'll be able to perform the operations losslessly.

Of course, no one would ever bother with that - even NASA truncates pi to 3.141592653589793 (https://www.jpl.nasa.gov/edu/news/2016/3/16/how-many-decimal...).

(And anyway, the distance from any irrational number to the 'nearest' rational is always infinitesimal, so it's generally a moot point unless you're doing calculus.)


> what the hell is the plural form of series?

Series. It's from the Latin 5th declension (res, dies, species, etc.) for which the nominative singular and plural are the same, which carries over into English.


Well, that's annoying if you need precision of meaning in a discussion.


In Latin that was seldom a problem, unlike in English, because these nouns were seldom used in the nominative case, i.e. as agents of verbs.

They were used much more frequently in other cases, like accusative (singular seriem, plural seriees).

In general the fact that English and many other languages borrow the Latin nouns in their nominative form, because of the rather useless tradition that this is their dictionary primary form, can be considered a serious mistake, one which frequently makes the use of such words awkward.

The termination "-s" does not have anything to do with the meaning of the word "series". It is just one of the most frequently used markers for the agent of a verb. Even if it is written without delimiting spaces after "serie", it does not belong to this word any more than for example the word "at" in "at home" would be considered to belong to "home". Like "home" can be encountered in various contexts, e.g. "at home", "in the home", "to the home", "for the home" and so on, "serie-" can be found in various contexts, e.g. "serie-es", "serie-m", "serie-ebus", "serie-ii" and so on.

What corresponds in Latin to an English word is the Latin word stem, without any of the terminations that may be attached to it. The Latin terminations correspond in English to either explicit prepositions or to implicit role markers provided by the fixed order of the English words.

When a Latin word is borrowed into English, it should have been borrowed as its bare stem, not together with its nominative termination. Moreover, for the nouns that denoted inanimate things, the nominative was rarely used and the accusative case was the main form. Even for animate but non-human beings the most important case was still the accusative. This is the reason why in the Romance languages most of their nouns correspond to the accusative forms of the Latin nouns and not to their nominative forms.

If the Latin nouns were borrowed as bare stems, there would never be conflicts between their singular nominative marker "-s" and the English plural marker "-s".


That's a bit like using a steam hammer to crack a nut.

A much simpler way of thinking about it is:

What is the chance that if I pick an infinite number of random natural numbers between 0 and 9 that all of them are 0?

You're missing the repeating fractions but it gets the flavor of what's actually happening across much better than a Lebesgue integral.


Hmm... if I pick an infinite number of random natural numbers between 0 and 9 and all of them are 0, can you prove they aren’t random?

https://www.americanscientist.org/sites/americanscientist.or...

(I kid, I kid)


I got this one without realizing the complex basis for it... there's an infinite number of real numbers between 0 and 1, compared with a limited proportion of rational numbers.


There's an infinite number of both.

A way to intuit why you have a zero probability of choosing a rational is to consider randomly picking digits one at a time. Suppose you start with a "3". What is necessary to get one third as your random choice? You must now uniformly randomly select an infinite number of 3s. You can think of that as just a contradiction without much loss here, in that selecting an infinite number of 3s is infinite proof that your process must not be random.

(Putting rigor on that might be a challenge but I think it's a fine intuition.)

This argument similarly holds for any rational, because no matter how large any given rational's repeat may be, it is always finite, and in order for that rational to be your choice you have still fully determined the result of a putatively uniformly random process for an infinite number of exact picks, and if it is truly random, the probability of that process producing the exact necessary repeat infinitely is always 0.


That argument works for any number. The probability of picking any particular number is zero, there's nothing special about rationals from that perspective. This has no bearing on the measure of the set as a whole.


Which is why I very, very explicitly presented it as a way to gain some intuition and not as a rigorous mathematical argument. That wasn't an accident.


The probability of picking a single number can't be used to determine the probability of picking a set. You can't build intuition this way.

This is almost exactly the same thing as saying integrating x^3 from minus infinity to plus infinity is zero because each zero width line has zero area, so adding them all up is zero too.

Gives the right answer by coincidence, but also works to show incorrect results.


That argument proves too much: you similarly will never pick Pi, or any other irrational. And yet, the probability you pick some irrational is 1.

In fact, the probability you pick an uncomputable number is 1, too.


I agree with you, except your claim that it proves too much. You will indeed never pick pi or any other specific number you can name. I don't know if you were trying to say my argument leads to those results you consider absurd, but they are in fact the correct results.

One way of interpreting this result is to say that randomly selecting a real number isn't physically meaningful and this does not fit into our human brains well. You can do math on the "random number", but you can't ever actually have one in hand (as a sibling comment to yours points out correctly, it is guaranteed to be uncomputable as well), not even mathematically, so you should expect counterintuitive results. Which is generally true of the real numbers when you get close enough to them anyhow. They are popular for a reason but they have rather more spiky bits and strange behaviors than one might expect from a number called "real".

I think both of you may be thinking of "random number" as something like what you get out of a RNG or something, but random reals are much stranger than random ints from a finite range.


No, I mean that if you start with "you won't land on any given rational" and sum that up to mean "you won't land on any rationals", that argument also seems to prove that you can't ever land on any irrationals either, so as an intuitive argument it doesn't really point toward why you'll always land on an irrational and not a rational.

("You'll never land on Pi because the odds of picking out the infinitely many digits is zero"- repeat that for every irrational, and naively sum that up, and it looks like you'll never land on an irrational either, which is not true.)


"that argument also seems to prove that you can't ever land on any irrationals either,"

It does not.


Suppose you construct a fat Cantor set of measure 0.5. You then select a random real from [0,1]. Similar to the rationals, any particular number in the fat Cantor set has zero chance of being selected. Does that mean that the chance of selecting a member of that set is 0?


But there must be many more of the one than the other, since one has to satisfy a constraint and the other does not.


EDIT: Honestly, just read the wiki article on Cantor's Diagonal Argument[0].

[0] https://en.wikipedia.org/wiki/Cantor's_diagonal_argument


From the abstract, "simulations of such systems are usually done on digital computers, which are able to compute with finite sequences of rational numbers only."

Not at all! Digital computers can use computable reals, which are defined as functions from a rational value (the error bound) to a rational value that is within the supplied error bound of the true value. Do not mistake this for computing in the rationals: these functions which perform the described task are the computable real numbers. There are countably infinitely many of these functions, one for each computable real. For instance, note that you can always compare two rationals for equality, but you can't always compare two computable reals for equality, just like reals.
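A small Python illustration of that representation (sqrt2 and add here are my own toy sketches of the idea, not any particular library's API):

  from fractions import Fraction

  def sqrt2(eps):
      # A computable real: given a rational error bound eps, return a rational
      # within eps of sqrt(2), here by bisection with exact rationals.
      lo, hi = Fraction(1), Fraction(2)
      while hi - lo > eps:
          mid = (lo + hi) / 2
          lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
      return lo

  def add(x, y):
      # Addition of computable reals is computable: just ask each argument
      # for half the allowed error.
      return lambda eps: x(eps / 2) + y(eps / 2)

  two_sqrt2 = add(sqrt2, sqrt2)
  print(float(two_sqrt2(Fraction(1, 10**6))))   # ~2.8284271, within 1e-6 of 2*sqrt(2)

Note that add only needs to ask its arguments for tighter approximations; there is no analogous trick for deciding x = y, which is the asymmetry being pointed at here.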

Hans Boehm (of Boehm garbage collector fame) has been working on this for a long time, here is a recent paper on it: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037


Most fractal functions don't have a well-defined derivative. If the length of the fractal curve increases indefinitely as you zoom in, the concept of a local slope is not meaningful. More direction changes appear as you zoom in.

Is there more to this result than that observation?


I haven’t read the whole article yet, but I’m guessing it boils down to a Halting Problem?

And I assume if a Turing machine can’t, neither can a human. If that is not the case, something smells fishy in Denmark.


why is it impossible to decide whether x<0, x=0, or x>0 as in Example 1?


This is better known as the Table Maker's Dilemma.

Say you have some computable number p, that means you can compute a (rational) approximation p' to p within any given tolerance eps > 0 (i.e. you know |p - p'| < eps). To determine whether p > 0, p = 0, or p < 0, you compute an approximation p' to a certain tolerance eps. If p' > eps then you know p > 0, if p < -eps then you know p < 0, otherwise you need a better approximation... Without further knowledge about p, there is no point where you can assert p = 0.
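A sketch of that loop in Python, with p represented as a function from a tolerance to an approximation, and with the second comparison written as p' < -eps (see the reply below). The arbitrary cutoff is exactly the problem: when p really is 0 there is no right place to stop.

  def sign(p, max_refinements=60):
      # p(eps) returns an approximation p' with |p - p'| < eps.
      # Halts with the correct sign whenever p != 0; if p == 0, no finite
      # number of refinements can ever confirm it, hence the cutoff.
      eps = 1.0
      for _ in range(max_refinements):
          approx = p(eps)
          if approx > eps:
              return +1
          if approx < -eps:
              return -1
          eps /= 2
      raise RuntimeError("could not separate p from 0 at this precision")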


Thank you for the response. (I assume you mean "p' < -eps" in the second conditional?)

How often are rational approximations computable within any given tolerance?


Only x=0 is undecidable. It's because you have to check an infinite number of digits past the decimal point to see if all of them are zero, and that will not halt.


> Only x=0 is undecidable.

That cannot be true, because if both x<0 and x>0 are decidable then x=0 is also decidable.


Equality of real numbers is undecidable. Specifically recursively undecidable, which is the same as uncomputable.

The Real numbers are topologically weird when you dig into it.

Random example.

https://mathworld.wolfram.com/RichardsonsTheorem.html


I agree that given two computable real numbers x,y, asking whether x=y is undecidable.

The point I'm making is that it can't be true that x<y or x>y are decidable, because then x=y would also be decidable, and it isn't.


A computable number is a real number that can be computed to any desired precision by a finite, terminating algorithm.

x<y meets that definition because you can compare digits until one doesn't match.

The same can be done for not equal, testing until there is a difference.

Real real numbers are infinite precision.

Showing that a number is not not-equal always requires you to normalize, truncate, etc...

No matter how many digits you test, there is an uncountable infinity of real numbers within your epsilon.

That means you do not have a finite terminating algorithm.


Is x < y decidable or semidecidable?

If x and y are the same number, then a Turing machine that compares the digits of x and y to determine whether x < y will never terminate. Doesn’t this mean that x < y is not decidable?


A Turing machine has an infinitely long tape, and even if it takes 10 times the time to the heat death of the universe x<y will halt.

Finite time is not practical time.

Given two Turing machines A and B approximating numbers a and b, where a ≠ b, you can use an epsilon to prove a > b, that is 'sufficient' for a proof.

You are trying to convert that one-sided, semi-decidable 'a>b' into a two-sided decision where a>b is yes and a≤b is no.

That is undecidable.

Even if you change the ≤ into <, which does imply the existence of b, it doesn't prove equality.

You have to resort to tactics like normalization for that.

Type theory is probably a good path to understand why. You seem to be coming from a set perspective and you will run into the limits of ZFC that way.


> A Turing machine as an infinite long tape, and even if it takes 10 times the time to the heat death of the universe x<y will halt.

> Finite time is not practical time.

Sorry for my daftness but if x and y are both a transcendental number, for example pi, then x and y have infinitely many digits so the Turing machine that determines whether x < y is true will not halt, even with an infinitely long tape, or will it? What am I misunderstanding?

If you could recommend a source that explains the basics of these arguments, I’d appreciate it.


Pi is transcendental but computable.

For an introduction to real real analysis there is an old book typically called 'Baby Rudin': Principles of Mathematical Analysis by Walter Rudin.

It is tiny and expensive, as most college math textbooks are once you get past a certain level. No idea if there is something better now; it is from the 1950's and there are used copies in most used book shops for cheap (or there were). It has no definition of 'equals' but does have a concept of equivalence relations.

Dummit and Foote's Abstract Algebra book is a great reference, especially with the problem with AI pollution. Coming from a General Relativity background, it still took me two years to work through on my own.

I don't know any suggestions on a more modern approach of starting with type or category theory.

I am not really interested in constructivist math, it is a problem to get around for me, so maybe someone who is can chime in on a better path.

While not rigorous, this paper does a good job at explaining just how muddy the concept of 'equality' is.

https://arxiv.org/abs/2405.10387


What if you want to decide if x<y when actually x==y? Then your comparator doesn't halt.


You need to move past decimal expansion; there are sufficient methods to prove one-sided limits on the reals in finite time.


Haven't seen the paper, but I'd guess the argument is something like "if the domain for x includes 0, then [edit: whenever x = 0] the Turing machine will get into an infinite loop looking for the first inverted bit[1]. _None_ of the three states will ever be provably true or false. If the domain _excludes_ 0 then you can prove that the machine will hit a non-zero bit in finite time, thus terminate, and either x<0 or x>0 will be shown true. But if you exclude 0 then you've also shown that the =0 state will never be true."

[1] multiple zero bits and a one bit is >0, multiple 1 bits and a zero bit is <0. Infinite zero bits (even if followed by a one bit) is +0, infinite 1 bits (even if followed by a zero bit) is -0.


An exact comparison is undecidable. You can only say if the numbers sufficiently differ. This is equivalent to saying that x > 0 + ε or x < 0 - ε, where ε > 0. If you represent numbers in binary, ε = 2^(-n), where n is the maximum number of bits you allocate to represent a number. Obviously you have n < ∞ in any finite machine.


Nope. You're thinking about this mathematically, not algorithmically. The question is essentially to decide whether the output of a TM is all zeros or whether it will eventually output a 1. If it outputs a 1 you will know that in a finite amount of time by running the program, but if it never outputs a 1 then you can't tell that by running the program, and it's impossible in general to tell any other way because that would be tantamount to solving the halting problem.

Here is a concrete example: write a program that systematically tests all numbers for counterexamples to the Goldbach conjecture. Every time a number fails to be a counterexample, output a zero, otherwise output a 1. Consider the output of that program to be a real number expressed as a binary expansion. If that number is not zero you will know in a finite amount of time, but if it is zero you will not know that until someone proves the Goldbach conjecture.
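Roughly, in Python (with a naive primality test for brevity):

  def is_prime(n):
      if n < 2:
          return False
      d = 2
      while d * d <= n:
          if n % d == 0:
              return False
          d += 1
      return True

  def goldbach_digits():
      # For each even number >= 4, emit the next binary digit of the real
      # number: 0 if it is a sum of two primes, 1 if it is a counterexample.
      # The number equals 0 iff Goldbach's conjecture is true, and no finite
      # prefix of this stream can ever tell you that.
      n = 4
      while True:
          is_sum = any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))
          yield 0 if is_sum else 1
          n += 2

Every digit anyone has ever computed from this stream is 0, and yet nobody can tell you the number is 0.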


The order relation on computable reals is undecidable - this is well known, and the parent's argument is correct. If both x<y and x>y were decidable then you could easily decide equality too by just checking them and seeing if the answer to both was "no".

Computability theory is mathematics.


Tracing back through this thread, the term "decidable" was introduced in a very unfortunate way:

ekez: why is it impossible to decide whether x<0, x=0, or x>0 as in Example 1?

lisper: Only x=0 is undecidable. It's because you have to check an infinite number of digits past the decimal point to see if all of them are zero, and that will not halt.

teraflop: That cannot be true, because if both x<0 and x>0 are decidable then x=0 is also decidable.

Both lisper (me) and teraflop are correct. The problem is that inequality is not decidable, it's semi-decidable. It's teraflop who introduced the word "decidable" into the discussion, not me, but I didn't pay enough attention to that at that time.

/u/scarmig got it right in this comment: https://news.ycombinator.com/item?id=41150519


Right, I agree that they are semidecidable, thanks for clarifying. My interpretation of "only x=0 is undecidable" was that you were claiming the other two were decidable (which is a reasonable reading imo). But it sounds like we don't actually disagree here.


It's my fault for not being more explicit. I just didn't think it all the way through.


> Nope. You're thinking about this mathematically, not algorithmically.

I don't know what you mean by this, and I'm not sure how it relates to what I said. I'm using "decidable" in the strict, computer-science sense of the word.

The statement in example 1 of the paper, which we're discussing, is about computable real numbers. A computable real number is one for which there is a Turing machine that can calculate it to any desired finite precision in finite time.

A semidecidable problem is one for which there is a program that halts if the answer is "yes", but may not halt if the answer is "no". The halting problem is in this category.

Given a computable real number x, asking whether x<0 or x>0 are semidecidable but not decidable problems.


Yes, you're right. See https://news.ycombinator.com/item?id=41154273 for my post-mortem.


x > 0 and x < 0 are both semidecidable. Which is to say, they can be decided, but their converses cannot. So if either of those are true, they will halt (by running them in parallel, and halting once one halts), but if neither of them are, it may not halt.

If you remove 0 from possible inputs, you no longer need to worry about that possibility, and the problem is then decidable.


All models are wrong. Some (wrong) models are useful.

Computers can't represent 0.1 (in floating point), yet that hasn't stopped anyone from doing finances on their computer.

I don't think this is big news OR a parlor trick. It's just some obscure thing computers can't do that nobody has noticed for 70 years because nobody needed it.


> Computers can't represent 0.1 (in floating point)

Of course they can. They just can't do it in binary [EDIT: base 2]. But they can certainly do it in (say) BCD [EDIT: base 10].


.NET’s decimal datatype for example represents a decimal floating point number


Just as a total pedantipoint, they also do that in binary (hence the B in BCD, in that specific case).


I've updated my comment with appropriate pedantedits.


But BCD is not floating point (generally shorthand for the IEEE 754 Floating Point standard, which nearly every CPU and GPU has hardware support for). And I don't know much about BCD, but it is probably missing niceties like NaN and Inf for capturing edge cases that happen deep inside your equations. These matter if your system is to be reliable.


> generally shorthand for the IEEE 754 Floating Point standard

Yes, generally, but that is just a social convention. There is nothing stopping you from doing floating point in base 10 rather than base 2, and if you do, 0.1 becomes representable exactly. It's just a quirk that 1/10 happens to be a repeating decimal in base 2. It is in no way a reflection of a limitation on computation.
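For instance, Python's decimal module is a radix-10 floating point type:

  from decimal import Decimal

  print(Decimal("0.1") + Decimal("0.2"))   # 0.3, exactly
  print(0.1 + 0.2)                         # 0.30000000000000004 in binary FP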


IEEE 754 has defined decimal floating point in single (32 bit), double (64) and quadruple (128) precision since the 754-2008 revision. (However, .NET's type I mentioned above, even though it's 128-bit, is its own thing and does not adhere to it.)


> Computers can't represent 0.1 (in floating point), yet that hasn't stopped anyone from doing finances on their computer.

Floating point for financial data may have made sense back when my 386 DX CPU had an FP coprocessor and computation was dog slow.

In this day and age though you'll typically be not just a bit but much better off using an abstraction that can represent decimal numbers precisely, which frees you from a great many rounding errors, epsilon computations, error propagation issues, etc.


[flagged]


This paper is not unremarkable. It's actually quite a surprising result. The only question is whether it's earth-shattering, or just a parlor trick. My money is on parlor-trick, but I don't understand the math well enough to be confident of that. I posted this here hoping to find someone who could shed some light on this question.


It is unremarkable - it has 5 citations since May 2022. I have papers with more cites from this year.

Edit: it's a parlor trick because they invent a decision procedure (bounds on maxima) and then prove something based on that premise:

> It will be shown that non-computability of the first derivative is not detectable by a Turing machine for two concrete examples

I wonder if they rigged the game hmmm. In general people do this kind of thing all the time to get published (build both a technique and a foil for their technique) and it's always a weak paper.


1. https://academia.stackexchange.com/questions/37021/why-is-it...

2. > it has 5 citations since May 2022 ... I have papers with more cites from this year ... In general people do this kind of thing all the time to get published

People also play games to get citations.


[flagged]


Your original response of "it has few citations so it's unremarkable" was also an NPC response, which I wouldn't have commented on except that you're calling other people's responses NPC responses.


> Your original response of "it has few citations so it's unremarkable"

I'm befuddled. it's right there at the root of this string of comments: my original response was "it's unremarkable because it's niche in a niche area". it's only when someone asked why, i responded with citations and etc. so my original comment was not NPC according to your assessment, and it's only in response to someone reasserting that i had to cite something.


You should take a look at https://news.ycombinator.com/newsguidelines.html for some suggestions re: "NPC response".


The false decorum/hypocrisy is palpable: I don't see how a dismissive/flippant link to a generic stack exchange post is any better in terms "quality response".


It's not a personal attack via crappy internet trope, that's the hopefully-now-obvious difference.


Sure but it's just as dismissive. At least my response included more reasoning in addition to the crappy Internet trope.


You can't escalate to edgelordery just because you read something as dismissive, at least not on HN. Or in npc-jargon terms, Snowflake Edgelord is not a valid alignment/class combo.


Sure, but rings off to the ear: this 2 year old paper saying it proved a Turing machine could say if a Turing machine could enter infinite loop given a very particular problem class could possibly be big news, earth-shattering news, I need someone else to tell me if bounds checking is involved. thank God it made it to the front page, I submitted it so I could get someone else to tell me


Tbf you can always ask an LLM to explain it /s



