
A Turing machine can compute the square root of two exactly. It can't represent it exactly as a fixed point number, but that's not the only way to compute something. There's a pretty standard definition of computable numbers [1] and it includes the square root of two.

[1] https://en.wikipedia.org/wiki/Computable_number




Sure. Note that I'm not trying to say the position being argued is right; I'm simply trying to describe it as well as I can understand it.

One might say that a "classical analogue" computer that "outputs" line segments could output the square root of two, but a digital computer could merely "represent" it.

That could be pure sophistry, especially since such classical analogue devices never have truly infinite precision. But it seems like that's the sort of thing being claimed here.


I think even the ability of a computer (even a non-digital one) to output the square root of two isn't easily asserted. Quantum theory doesn't play nice with infinite precision.

- You can't output a voltage of sqrt(2) because you output a finite number of electrons in each timeframe, so your resolution is limited.

- We don't know if you can output a signal of length sqrt(2) because we aren't sure yet whether time is infinitely divisible [1]

- You might be able to position gears or other physical objects with infinite precision, but reading the value with a higher precision than one Planck length seems impossible [2]

1: https://en.wikipedia.org/wiki/Chronon

2: https://en.wikipedia.org/wiki/Planck_length


I'm having a hard time understanding the difference between outputting something and representing it. Can you explain the difference, or is that just your best understanding of the argument here?


The point is that we can represent sqrt(2) symbolically (as I just did there) but it cannot be represented as a binary (or decimal) expansion.

Computers can also work symbolically rather than based on some direct binary representation. I think equality and especially comparison become more difficult to determine, but it remains possible.
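
For example, here's a rough sketch of the symbolic approach in Python using the sympy library (my choice of library and operations, just to illustrate; any computer algebra system would do):

    # sqrt(2) as an exact symbolic object, not a binary approximation
    from sympy import sqrt, Rational, simplify

    r = sqrt(2)
    print(r**2 == 2)           # True: (sqrt(2))**2 simplifies to exactly 2
    print(simplify(r*r - 2))   # 0, exactly, not "close to 0"
    print(r > Rational(7, 5))  # True: comparison against a rational is decided exactly
    print(r.evalf(50))         # a 50-digit decimal approximation, only on demand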


But this is merely a product of the choice of representation.

Here is a fully expanded representation: 1.

Of course my choice of base is a bit exotic, but the point is that no matter what, we're working on symbols. We're just used to seeing some as more direct analogues of numbers than others.


What's a direct binary representation? If that means having each digit explicitly written out, no model of computation could possibly do that in finite time. If it means having the ability to output arbitrary bits in finite time, Turing machines can do that.


It's probably easier to give an example of a number that I can't represent symbolically. However, I can't give you a specific example, since if I uniquely define a number then that definition is itself a symbolic representation of that number. But here is a recipe for one: a uniformly random real number on [0, 1]. Or, computationally speaking, uniformly sample an infinite sequence of Bernoulli random variables and write them down as a binary expansion. Of course, this process takes infinite time, which suggests that such numbers are not representable.

For a proof, consider the fact that every representable number, by definition (I know I haven't given a formal definition), is represented by some finite string. But the set of finite strings is countable. So the set of representable numbers is a countable, hence measure-zero, subset of [0, 1], and a uniform random sample is representable with probability zero.


Or put differently:

How long is that piece of string?

a) Here, have a look at it. It's exactly that long.

b) 4.51 inches

Both answers are correct. (a) is correct by definition. (b) may be totally correct if we attach a precision to the answer, like +- 0.01 inches or whatever we find to be true.


How long is that piece of string?

With how much precision can you measure it?

With how much precision can you describe it?

The answers are pretty similar for humans or Turing machines IMO.


An output would be a physical extent in space (a line of some length, the turning of a dial, etc.); a representation would be a description.


Under that definition, can Turing machines produce output at all? It seems that the Church-Turing thesis is about representations and not outputs.


The original formulation of the machine just operates on a tape; the closest thing to an output it has is leaving the tape in a particular state (such as writing down the digits of the calculated function on successive cells of the tape). But there's no "output" function to spit out a number. You can tack one onto the specification if you want, but it doesn't really change the power of the device.


I'm specifically asking about joe_the_user's definition of output as being a physical object rather than a number.


I don't think that's technically correct; I think it would be more correct to say that, for any given natural number N, a Turing machine can calculate sqrt(2) to N digits of precision, and there is no upper bound on N beyond which a Turing machine can't do it anymore. That's not the same as computing "the exact square root of 2" (only Chuck Norris can do that, of course).
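
To make that concrete, here's a minimal sketch in Python (integer arithmetic only; the helper name is mine): for any N you hand it, it produces sqrt(2) truncated to N decimal digits, and nothing in the code caps N other than time and memory.

    from math import isqrt

    def sqrt2_digits(n):
        """sqrt(2) truncated to n decimal digits, as a string."""
        # isqrt(2 * 10**(2*n)) == floor(sqrt(2) * 10**n)
        s = str(isqrt(2 * 10**(2 * n)))
        return s[0] + "." + s[1:]

    print(sqrt2_digits(50))
    # 1.41421356237309504880168872420969807856967187537694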


The square root of two cannot be represented exactly in a finite decimal representation, but there are other ways to represent it exactly in finite space. We both know precisely which number we're talking about, despite using computers to represent it.


But "exactly representing it" just cashes out as "being able to answer questions to arbitrary precision about its value", and yep, TMs can do that do that, in finite space.


Doesn't the definition of computable numbers, from the referenced Wikipedia article, only state that they can be computed to a given nth digit? Thus, in the OP's example, the rule-and-compass construction is exact whereas the Turing machine could only approximate the number (unless given infinite time, which I assume is out).


We need a digital representation of real numbers. One possible choice is: a real number r is represented as the source code of a function which takes a natural number n to the nth digit of r. This seems no more like “cheating” (and to me, it is less like cheating) than to represent a real number r by a line segment of length r.
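
A minimal sketch of that encoding in Python (the function name and the convention that digit 0 is the units digit are my own assumptions for illustration):

    from math import isqrt

    def sqrt2_digit(n):
        """nth decimal digit of sqrt(2); n = 0 is the units digit."""
        return isqrt(2 * 10**(2 * n)) % 10

    print([sqrt2_digit(n) for n in range(8)])  # [1, 4, 1, 4, 2, 1, 3, 5]

The finite string that represents sqrt(2) here is the source code of sqrt2_digit, not any particular expansion of its digits.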


That can't represent most real numbers, although I don't know that we have actually established that physical lengths can.


Both representations have the feature that it's impossible to check whether two real numbers are equal.
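
Here's a sketch of why, using a slight variant of the digit encoding: represent a real as a function that, given n, returns a Fraction within 10**-n of it (the names and the budget-based test are mine, just to illustrate). You can refute equality by finding approximations that sit too far apart, but no finite amount of checking ever confirms equality:

    from fractions import Fraction
    from math import isqrt

    def sqrt2(n):
        # floor(sqrt(2) * 10**n) / 10**n is within 10**-n of sqrt(2)
        return Fraction(isqrt(2 * 10**(2 * n)), 10**n)

    def const(q):
        # an exact rational, viewed as an approximation function
        return lambda n: Fraction(q)

    def provably_different(x, y, budget):
        """True if x and y are provably unequal within `budget` refinements.
        False only means "no difference found yet"; it never certifies equality."""
        for n in range(budget):
            if abs(x(n) - y(n)) > 2 * Fraction(1, 10**n):
                return True
        return False

    print(provably_different(sqrt2, const(Fraction(141421356, 10**8)), 20))  # True
    print(provably_different(sqrt2, sqrt2, 1000))  # False, but that proves nothing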


Actually, with the physical objects you could use one to occlude the other and test for light passage.


Not with infinite precision.


Importantly, this is actually true of the real numbers.


I don't think I believe in "actual" real numbers.


The catch is that you can't actually measure the rule-and-compass construction with infinite precision. You can perform more operations on the construction and then measure later to verify that it produced the correct result, but so can a Turing machine acting on a mere "representation" of the number.



