Gödel and the limits of logic (2006) (maths.org)
148 points by Pishky on March 13, 2017 | 39 comments



I highly recommend getting Godel's proof[1] and reading it. It's an amazing journey and quite understandable from the start (the first half of the book is an introduction), and I've never been able to read proofs very well. It takes concentration, but once you get it (at '17 Gen r') it's almost like a symphony going off. The crystalline brilliance of the "System P" decomposition is worth it just to see how math as a process could be reduced to such a simple, clear and concise set of symbol manipulations...and then the rest shows how that could be used to topple itself. Incredibly philosophically insightful.

[1] https://www.amazon.ca/Undecidable-Propositions-Principia-Mat...


I can't speak for Godel's original proof since I haven't seen it before, but I certainly found Computability and Logic [1] to be pretty approachable. It's the textbook used for UC Berkeley's "Intermediate Logic" philosophy course (in other words, it's so easy even philosophy majors can understand it! :P).

From my understanding, C&L diverges from Godel's original proof technique in order to make it easier to follow, but it's much more rigorous and explicit than what you'd find in Godel, Escher, Bach or something. It's still a textbook.

[1]: https://www.amazon.com/Computability-Logic-George-S-Boolos/d...


I try to read Godel's Proof once a year or so. You might also be interested in Gregory Chaitin's Meta Math! The Quest For Omega, which relates the halting problem (among other things) to Godel. A word of warning though, the book is rather idiosyncratic, as you might have seen from the title.


Got it years ago and loved it. Have to give it another read some day.


A light take on the subject can be found in Logicomix: https://en.wikipedia.org/wiki/Logicomix - a cool blend of a dry topic and comics.


Thanks for the recommendation. I found it on scribd if anyone is interested: https://www.scribd.com/document/98921232/Bertrand-Russell-Lo...


Are there any Gödel-like problems that do not involve self-reference?


Yablo's Paradox [0], possibly [1].

Regarding [1], "circularity" in an uncountably infinite context seems different than "self-referential", though the latter is usually geometrically analogized as the former. I tentatively consider Yablo's Paradox to demonstrate that an ineradicable cycle of alternating truth-assignments is equivalent to an infinite (>= aleph-one) regression of alternating truth-assignments, and thus that circularity does not map coherently to self-reference in all logic systems.

[0] http://www.mit.edu/~yablo/pwsr.pdf

[1] http://ferenc.andrasek.hu/papersybprx/jcbeal_is_yablo_non_ci...


Yablo's paradox is so cool.

    Imagine an infinite sequence of sentences S1, S2, S3, ...,
    (S1) for all k >1, Sk is untrue
    (S2) for all k >2, Sk is untrue
    (S3) for all k >3, Sk is untrue
    ...
Perhaps another argument in favor of regarding infinities as a logical fallacy. Assuming our universe is finite, all objects in the universe are finite, including sets. Yablo's chain of sentences ends at some N, possibly very very very large. Sentence N is true [there are no further sentences], and all other sentences are untrue, since for each i < N there is a true sentence with k > i: the Nth sentence.
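
A minimal sketch of that finite truncation in Python, assuming the chain is cut off at some finite N (N and the names are just illustrative):

    # Truncated Yablo chain: assigning True only to the last sentence is consistent.
    N = 100
    truth = [False] * (N - 1) + [True]   # S1..S_{N-1} untrue, S_N true

    def sentence_holds(i):
        # S_i asserts: every later sentence is untrue.
        return all(not truth[k] for k in range(i + 1, N))

    # Each sentence's assigned value matches what it actually asserts.
    print(all(sentence_holds(i) == truth[i] for i in range(N)))  # True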


Infinities can be handled safely in logic. You just have to be careful about them.

There may not be physically real infinities. But there are no physically real perfect squares either (assuming an analogue universe). It's a simplified model.


> But there are no physically real perfect squares either (assuming an analogue universe).

How does analogue vs digital (assuming you mean that analogue) matter here? You'd also want to specify which "digital physics" you want to rely on.

We don't need to think along those lines to consider a physically real idealized geometrical object.

Here I'll recast your square as a planar slice through a real object in our (3d+1d) universe.

With reasonable assumptions about the local behaviour of spacetime, you can't construct a physically perfect square because of the small-length-scale behaviour of matter, rather than because of the presence of infinities or infinitesimals. The assumption here is that locally around the square the spacetime is Minkowskian, which for a small and light square is practically guaranteed everywhere outside the event horizons of black holes and far from the hot dense phase of the universe. "Practically guaranteed" here means that in spite of plausible theoretical objections to the perfectness, there exists no mechanism to show that the object is not a perfect square (i.e., such objections are conjectural physics leaning hard on a purely mathematical argument).

"Digitalization" and discretization arguments run into observational problems right away.

The wavelength of light is not quantized, for example, and is not clearly bounded in the infrared. (In the ultraviolet you get pair production and gravitational collapse, however.) While one could make an argument that at the emission event of every photon there is not an infinity of available photon wavelengths, you would have to forbid infinitesimal spacetime intervals to avoid emitted photons stretching into arbitrary wavelengths under the metric expansion of the universe, and the only practical way of doing that is through discretization (with or without emergence).

There is a substantial weight of evidence that suggests that if spacetime (or whatever it emerges from) is discretized, the discretization length is not significantly above the Planck length; notably gamma rays, x-rays, light and radio from violent astrophysical events do not appear to win races against each other in an ordered fashion.

So there may be infinitesimal spacetime intervals, and so any quantity that depends on spacetime intervals has an infinite set of values to choose from.

Absent a description of gravity-matter interaction when matter's small-scale behaviours are relevant, we cannot really preclude the possibility of building a perfect square in an unreasonable (but nevertheless physical) local patch of dynamical spacetime, such that the microscopic behaviour of the latter offsets the odd quantum mechanical contributions to the square. At best we can try to place limits on how long such a system could survive (it could last a very long time in the cold sparse phase of the universe).


> How does analogue vs digital (assuming you mean that analogue) matter here? You'd also want to specify which "digital physics" you want to rely on.

In the kind of "digital physics" where you have a fixed grid it would be possible to create a truly perfect square, that's all I was referring to. (I'm well aware that such physics would be incompatible with Lorentz invariance and thus unlike our real universe).


> In the kind of "digital physics" where you have a fixed grid it would be possible to create a truly perfect square

Discretization is not really the problem: you instead have to fix the definite position and momentum of the microscopic components of the square such that all the parts of it stay in place relative to one another and to the total arrangement of the shape they collectively form.

Switching to a taxicab geometry doesn't change the tendency of matter to move around; it only defines the points in which matter might be found in principle, and where it can never be found, ever.

In effect, the dynamism of spacetime is a red herring (we have difficulties building ideal spheres in flat space, theoretically speaking), and spacetime geometry (in the sense of minimal lengths vs infinite differentiability) is at best a marginal contribution to the problem.

While Lorentz invariance is essentially what blocks the quantization of energy, you can still have a discretized spacetime that preserves Lorentz invariance. One broad approach to that involves a Lorentz contraction on the spacings between the allowed points, so that observers may disagree on the lengths of objects marked off in Cartesian coordinates on the fundamental lattice spacings. In particular, even in the absence of gravitation, one observer will conclude a small object has a definite lattice length while a generic accelerated observer (and at least some boosted observers) will conclude that the same small object's length is in superposition.

Additionally, one may quantize lengths and times as in a Snyder spacetime, and still recover Lorentz invariance (or at the very least do away with preferred frames; things are tricky where boosts are high and ultimately it is likely impossible to recover the Lorentz subgroup in its entirety) in a quite natural fit with a deformation of Special Relativity.

Unfortunately we still have GR, with its strong successes, and several viable models which have minimum length scales. So at least at the energy limits we have access to now, we can't really make a concrete choice about which better describes our universe. In part that's because almost all these minimal-length theories are carefully designed to reproduce General Relativity in some limit, precisely because of those successes.


Not trying to argue here, just learn more.

This Yablo's paradox seems self-referential to me. The statements are making claims about a series of statements of which they themselves are part. I see the statements only making claims about subsequent statements... but still.

How about, rather than banning "self-reference", you use a more careful phrasing requiring all structures to be fully and independently defined before you start asking questions (or making claims) about them? (No side effects from the question, please.) Does this avoid all Goedelish paradoxes?


It depends what you mean by a Godelish paradox. Godel proved that there are statements which are true, but unprovable. To me, that's quite a paradox, and there's no escaping it.


By Goedelish I mean all the stuff that is really just concretizations and corollaries of incompleteness.

You can escape incompleteness with less powerful axiomatic systems. But then you can't define the set of integers. (Correct me if I'm wrong.)


Hmm, maybe I just answered my own question. You can't define the set of integers without using elements of the set of integers. Like Peano's successor function. And if you have that power, you can state paradoxes. Like the above mentioned Yablo's paradox. ?


Presburger arithmetic[0] is powerful enough to describe the natural numbers with addition, yet still consistent and complete. On the other hand, Robinson arithmetic[1] lacks induction, but axiomatizes addition and multiplication, and that is sufficient to make it incomplete. Thus, the problem is not that you can describe the set of natural numbers, or induction, but the interaction between addition and multiplication.

[0] https://en.wikipedia.org/wiki/Presburger_arithmetic [1] https://en.wikipedia.org/wiki/Robinson_arithmetic
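
Not from the comment above, but a hedged illustration (it assumes the z3-solver Python package): satisfiability questions in linear integer arithmetic -- the language of Presburger arithmetic -- can be settled algorithmically. This only exercises the existential fragment; the decidability result covers arbitrarily quantified sentences too. The formula is just an example.

    from z3 import Ints, Solver

    x, y = Ints('x y')
    s = Solver()
    # Is 3x + 5y = 1 solvable in non-negative integers? (It is not.)
    s.add(3*x + 5*y == 1, x >= 0, y >= 0)
    print(s.check())  # unsat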


There are a few ways around undecidability. One is finiteness. If you have an upper bound, or addition cycles around to zero, there are a finite number of states. Halting is decidable for deterministic machines with finite memory - eventually it must either halt or repeat a state.

(An amusing practical solution to the halting problem is to run two interpreters in parallel, with one running twice as fast as the other. If their states ever match after the start, the program is in a loop. This trick was sometimes used in the batch computing era to catch beginner student programs that were stuck in a loop before they wasted much CPU time. Programs with long cycles wouldn't be caught, but that was usually not the problem with beginner programs.)
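
A minimal sketch of that trick, assuming the deterministic machine is modeled as a step function that returns None when it halts (the function and toy machines below are stand-ins, not any particular interpreter):

    def halts(step, state, fuel=10**6):
        # Run a 1x and a 2x copy of the same deterministic machine.
        slow = fast = state
        for _ in range(fuel):
            slow = step(slow)                  # 1x interpreter
            for _ in range(2):                 # 2x interpreter
                if fast is not None:
                    fast = step(fast)
            if slow is None or fast is None:
                return True                    # a halting state was reached
            if slow == fast:
                return False                   # state repeated: infinite loop
        raise RuntimeError("undetermined within the fuel budget")

    # Toy machines: count down to zero (halts), or loop modulo 5 (never halts).
    print(halts(lambda n: None if n == 0 else n - 1, 10))  # True
    print(halts(lambda n: (n + 1) % 5, 3))                 # False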


The thing is, what does it mean to "work around"? It is true that the halting theorem in its classical formulation can be "worked around" like this, but the classical halting theorem is not the one that matters in practice. What matters in practice is what's sometimes called the bounded halting theorem (which lies at the core of the time-hierarchy theorem), which roughly states that you cannot tell whether a program will halt on input x within n steps in fewer than n steps (and a corollary is that you cannot in general extrapolate information from one input to another). Bounded halting -- which is proved using the same diagonalization proof as the "classical" halting theorem -- is pretty robust. It holds for total languages and even for finite state machines.

So what really bothers us about the halting theorem is that -- in general -- you cannot tell what a program will do with some input any faster than actually running it, and this, bounded halting, is not something we can work around.


Decidability (as a property of decision problems like the halting problem) is a different notion, though: In the context of incompleteness, an undecidable statement is a statement that cannot be proven or disproved in that system. Decidability of a decision problem is concerned with an effective procedure that, for every instance of the problem, answers yes/no. Specifically for a logical theory, decidability of this theory is deciding for any formula in the same language whether it follows from the theory as a logical consequence.

Neither incompleteness nor undecidability implies the other[0].

[0] https://en.wikipedia.org/wiki/Decidability_(logic)#Relations...


Incompleteness does not imply undecidability, but undecidability implies incompleteness, because completeness implies decidability: in a complete system (powerful enough to describe Turing machines), for every TM M and input x, there's a theorem saying that M halts or one saying that M doesn't halt (because the system is complete). To decide halting, we therefore just need to enumerate all theorems in the system, and we are guaranteed to stop and find the deciding theorem[1].
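
A hedged sketch of that argument, where enumerate_theorems() is a purely hypothetical enumeration of the complete system's theorems:

    def decide_halting(machine, x):
        for theorem in enumerate_theorems():               # hypothetical
            if theorem == f"{machine} halts on {x}":
                return True
            if theorem == f"{machine} does not halt on {x}":
                return False
        # Completeness guarantees one of the two statements is eventually
        # enumerated, so this loop always terminates -- which contradicts
        # the undecidability of the halting problem.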

In fact, when Gödel saw Turing's work and was convinced that Turing indeed captured the general notion of a formal system, he said that the incompleteness theorem was finally satisfactorily proven for all formal systems.

[1]: See http://www.scottaaronson.com/blog/?p=710


Tortoise/hare loop detection also works pretty well for short-cutting Mandelbrot set iteration :-)
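
A minimal sketch of that use, with the tolerance and iteration cap chosen arbitrarily for illustration:

    def mandelbrot_iters(c, max_iter=1000, eps=1e-12):
        # Tortoise/hare over the escape-time iteration z -> z*z + c.
        slow = fast = 0j
        for i in range(max_iter):
            slow = slow * slow + c            # one step
            fast = fast * fast + c            # two steps
            fast = fast * fast + c
            if abs(fast) > 2.0:
                return i                      # the hare escaped
            if abs(slow - fast) < eps:
                return None                   # orbit is (numerically) periodic
        return None

    print(mandelbrot_iters(1 + 0j))    # escapes almost immediately
    print(mandelbrot_iters(-1 + 0j))   # period-2 orbit: cycle detected (None)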


You can also give up recursive enumerability. `True Arithmetic`, which takes as axioms all of the true statements in the language of PA, is certainly complete and consistent.


I think the point of the question is that perhaps we could escape it by banishing all self-referential statements?


What is (inescapably) paradoxical about that?


This reminds me of the Berry Paradox: https://en.wikipedia.org/wiki/Berry_paradox


I very much like the program vs. virus example/citation, but I would put it the other way around: "A program can find all viruses, unless the operating system is a virus itself."


"Digital Physics" (the movie) is now free on Amazon Prime. It is also available for rent/purchase on iTunes and Vimeo, in case you are kind enough to support an independent filmmaker:) Come explore the world of self-reference, infinity, complexity, Gödel & Turing, and psychedelics with Khatchig!!


Why have I never had to debug a Gödel paradox in my day-to-day programming? Is it because of types? Finite memory? Can someone enlighten me? Thanks


You almost certainly have had to debug a Gödelian paradox in your day-to-day programming. It's just that programmers call it "nontermination" or "infinite loops".

In 1952 the logician Martin Löb invented something called "provability logic" as a way to formulate Gödel's theorem more abstractly, so that we could see how the proof worked without having to work with explicit numerical encodings of formulas. (Think of this as the difference between working with strings versus abstract syntax trees.)

His version of the proof is now called Löb's theorem, and its structure is almost identical to what logicians call Curry's paradox, and which (via the Curry-Howard correspondence) programmers know as the Y combinator. The Y combinator is used to implement recursion in the lambda calculus, specifically including the ability to go into infinite loops.
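
A minimal sketch of that last point, using the Z combinator (the strict-evaluation variant of Y) in Python; the names are just illustrative, and this is not from the linked post:

    # Recursion with no named self-reference: Z passes a function to itself.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
    factorial = Z(fact_step)
    print(factorial(5))  # 120

    # A step that never bottoms out gives the infinite loop mentioned above:
    # loop = Z(lambda rec: lambda n: rec(n))   # recurses forever if called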

I blogged about this last year:

http://semantic-domain.blogspot.co.uk/2016/05/lobs-theorem-i...


Isn't Gödel's incompleteness a consequence of overly ambitious material implication properties? When you look at the truth table:

    A | B | A -> B
    0 | 0 | 1
    0 | 1 | 1
    1 | 0 | 0
    1 | 1 | 1

I fail to see why all rows except for 1 -> 0 = 0 aren't undefined. I understand the reasoning behind why this was done, but mixing unrelated predicates seems like a generally bad idea, even if it allows mechanical proofs.

Aren't there already "complete" systems like relevance logic? Now with computers we could perhaps make the proofs we need "relevant" instead of basing them on a fundamentally incomplete though "simpler" logic?


Logic already has separate symbols for 'A implies B' and 'A proves B', where the first can be considered equivalent to '(not A) or B' for all intents and purposes. Since 'A proves B' seems like the very definition of relevance, the only part you could possibly object to is that 'A proves B' implies 'A implies B', but that's given since the alternative would lead to a contradiction where A proves B, but B is false while A is not.
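
A quick sanity check of that equivalence (a throwaway snippet, not from the comment above):

    # Material implication A -> B agrees with (not A) or B on every row.
    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), int((not a) or b))
    # 0 0 1
    # 0 1 1
    # 1 0 0
    # 1 1 1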

Also, if I understand you correctly you're trying to 'fix' incompleteness by making A -> B undecidable, which seems like it would achieve the opposite.


The first incompleteness theorem shows that, in a sense, the culprit of a logic's incompleteness is not its "simplicity" but its "complexity": if the logic is rich enough to include Peano arithmetic, then it is incomplete. Compared to mathematics in general, a complete logic system is far less powerful and cannot be used to prove nearly as many interesting things.


Undefined / don't-care states also allow for simpler physical implementations. Those who did an EE/CS undergrad might remember Karnaugh maps.


Well no, Gödel's incompleteness theorem doesn't really have anything to do with material implication. It tells us that there is some formula G such that neither G nor ¬G is provable. Proof systems for relevance logic restrict valid derivations compared to classical propositional logic (relevance logic requires you to use the antecedent), so they can prove even fewer things.


  1 -> 0 = 0
What does this mean? Is the 0 after the equals the antecedent? Consequent?


(1 -> 0) = 0.

In other words, (true implies false) is false.


"is false" doesn't mean anything in Godel's system. That's too high level and imprecise. We're dealing with only symbol manipulation here. We only have provability (which really is "there exists a derivation for") - we don't have "truth".



