I never thought that Godel incompleteness of a logical system was particularly damaging; the ability to make self-referential statements is useful, the ability to resolve self-referential paradoxes less so.
Same for the halting problem in CS, which is typically resolved by (sleep 9999; kill -9 $pid). QED. ;-)
Russell's hope was to establish that one could find proofs of all true mathematical statements, so that one needed a fixed-for-all-time set of axioms and nothing more. Incompleteness obviously destroys that hope.
There is an interesting, if quite technical, answer here about Goodstein's Theorem, a very reasonable number-theoretic statement which cannot be proven from the first-order Peano axioms.
So questions around statements which are true but not provable in certain logical systems do have concrete examples and are interesting imo.
Goodstein's Theorem cannot be proved in the Peano axiomatization of the natural numbers in first-order logic. You can have a stronger axiomatization of the natural numbers in first-order logic that allows one to prove Goodstein's Theorem - such an axiomatization would contain a subset of set theory and transfinite induction up to the ordinal ε₀. But it is still an axiomatization in first-order logic.
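To make that concrete, here is a small Python sketch (my own illustration, not from the linked answer) of the sequences the theorem is about: write the current value in hereditary base-b notation, bump every b to b+1, subtract 1, repeat. The theorem says every such sequence eventually reaches 0, yet PA can't prove it.

    def bump(n, b):
        # rewrite n in hereditary base-b notation, then replace every b with b+1;
        # the exponents are themselves written in base b, so bump them recursively
        result, power = 0, 0
        while n > 0:
            n, digit = divmod(n, b)
            result += digit * (b + 1) ** bump(power, b)
            power += 1
        return result

    def goodstein(m, terms=6):
        # first few terms of the Goodstein sequence starting at m
        seq, b = [m], 2
        while m > 0 and len(seq) < terms:
            m = bump(m, b) - 1
            b += 1
            seq.append(m)
        return seq

    print(goodstein(3))  # [3, 3, 3, 2, 1, 0]
    print(goodstein(4))  # [4, 26, 41, 60, 83, 109] - reaches 0 only after ~3*2^402653211 steps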
Something like a Turing machine, with its finite states, deterministic transitions, and simple tape seems so mechanical, so physical. So predictable.
Sure, there's the halting problem, but that relies on a paradox.
Surely something as artificial as self-referential statements would seem a bit pointless.
And yet, with the help of that principle, it turns out we can write simple, mechanical programs where if the input is < N we can calculate the answer, and if the input is > N we can't figure out what the program will do (using standard mathematical thinking).
For some N we can get creative, but for a yet bigger N, we may well find ourselves unable to be creative enough to work out if we could ever work out the answer.
"In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which Σ(n) is unprovable in ZFC. To do so they constructed a 7910-state[2] Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property).[3][4] This was later reduced to 1919 states, with the dependency on the stationary Ramsey property eliminated.[5][6]"
My thoughts exactly. Goedel is just basically "set of all sets that don't contain themselves ... does it contain itself or not?" applied to 'provability'.
It gives you no more profound insight than "you can eff yourself with recursion if you aren't super careful", which is obvious to any beginner programmer who encounters recursion and tries to write whatever they like in a recursive function.
I guess this might have been a surprise for mathematicians, who always thought they had all infinities at their disposal and thus were completely unrestricted, and who brushed self-referential paradoxes under the rug as curiosities until Godel showed they can be constructed about things mathematicians care about, like provability.
I believe you can make a consistent fully provable axiomatic system by excluding statements that are not provable from your system, as meaningless.
You don't consider whether "number 5 contains itself" is true because it's nonsensical. "This statement is unprovable" can be considered similarly nonsensical, not because it wrongly combines math concepts but because it's a self-referential paradox and we choose to not allow that.
> I believe you can make a consistent fully provable axiomatic system by excluding statements that are not provable from your system, as meaningless.
The problem is not how to exclude the unprovable statements. The problem is that the unprovable statements will include statements that are true, but that the system can't prove. Some of those statements are probably not ones you care about, like "this statement is unprovable". But you have no way of knowing that all of the unprovable true statements are like that. Some of them might be ones you do care about.
Thus, the real import of Goedel's theorem is not "you need to exclude unprovable statements"; it is "the intuitively attractive ideal of having a system in which every true statement you care about can be proved is not achievable".
"The intuitively attractive ideal of having a system in which every true statement you care about can be proved is not achievable".
Sure it is. But the system cannot contain infinities or real numbers of infinite precision. The halting problem for deterministic computers with finite memory is solvable - the system must either halt or repeat a state, and if it repeats a state, it's in an endless loop. Yes, the number of states may be very large, but not infinite.
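A minimal Python sketch of that argument - `step` and `is_halted` here are stand-ins for a concrete machine's transition function and halt test, assumed deterministic over a finite state space:

    def halts(state, step, is_halted):
        # decidable because a deterministic machine with finitely many states
        # must either halt or eventually revisit a state (and thus loop forever)
        seen = set()
        while not is_halted(state):
            if state in seen:
                return False  # repeated state: endless loop
            seen.add(state)
            state = step(state)
        return True

The catch is exactly the one above: `seen` may need one entry per reachable memory configuration, which for even a small real machine is astronomically many. Decidable, but hopeless in practice.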
An interesting question to ask is "do we need infinities". In a sense, infinities are a convenience feature to make formulas simpler. Without infinities, everything has edge cases, bounds problems, tolerances, and all the annoyances known to people who do serious computer number-crunching. Theory in that area has lots of case analysis.
It's perhaps unfortunate that Russell lived in the pre-computer era. Simple theorems were essential in the paper and pencil era. In the 1970s and 1980s, Boyer and Moore took on much the same problem as Principia Mathematica. They published "A Computational Logic"[1], and built the first good inductive theorem prover for constructive mathematics. (I used it back then, and a few years ago I got it running again on Common LISP and put it on Github, so you can still run it.[2]) Proofs that come out of that thing have lots of case analysis. Mathematicians used to hate proofs like that. Many still do.
At one point Boyer and Moore formalized what a floating point unit does, and this was used to verify an AMD floating point unit. This followed an embarrassing recall of Intel CPUs that were slightly off on division; the verification work was paid for by AMD. It's a huge, ugly problem to formalize, compared to the ordinary rules for real arithmetic. It is, however, a system in which every true statement you care about can be proved.
The proof of the four-color theorem shook up the field. Thousands of cases. A simple problem with a huge proof that required machine assistance was a new thing back then. Now it's more common. Still makes many mathematicians unhappy.
> The problem is that the unprovable statements will include statements that are true, but that the system can't prove.
No, that is a misrepresentation of Goedel's results. A theorem that is undecidable (neither provable nor refutable) from a set of axioms cannot be 'true' in the logical sense, because there are models of that set of axioms in which the theorem is true and other models in which it is false - see Goedel's completeness theorem, which says that every statement true in all models is provable (and vice versa).
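In symbols, with T a set of axioms, ⊢ for provability, and ⊨ for truth in every model of T:

    T ⊢ φ  ⟺  T ⊨ φ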
Goedel's incompleteness theorems can be understood on the semantic level as saying that the mathematical structure of the natural numbers cannot be characterized by a sane set of axioms: any such attempt (e.g. the Peano axioms) that describes the natural numbers also describes a different mathematical structure (a nonstandard model of arithmetic), and there exists a theorem that is true in one model and false in the other (so that theorem is undecidable).
An undecidable proposition is true in the logical sense if we declare it true and adopt it as an axiom.
It might seem that it's not true in that logical way which insists that truth is derived from existing axioms. But that concept depends on unproven axioms in the first place, and derivation can be a zero-step process.
That is to say the axiom we have added is now derived from an existing axiom (itself) by an empty sequence of logical steps.
How do you find that undecidable proposition? You derive it from axioms. You can't declare it axiomatic if you can't find it because it's undecidable.
The big idea that the Incompleteness Theorem torpedoed was that it's possible to enumerate all and only those theorems that are true/provable for a given set of axioms. To get all of them, you must allow unprovable theorems; to eliminate the unprovable ones, you'll necessarily also eliminate some provable theorems. It's Heisenberg's Uncertainty Principle for logic, and it was exactly that destructive to logical determinism/positivism.
> A theorem that is undecidable (neither provable nor refutable) from a set of axioms cannot be 'true' in the logical sense (because there are models of that set of axioms in which the theorem is true, and other models in which the theorem is false)
Yes, fair point. A more careful way to make the point I was trying to make is that you might care about the models in which the undecidable statement is true.
E.g. we certainly care about geometries in which a pair of lines that cross another line at the same angle are truly parallel, such that they do not meet.
We care about this even though it's not provable from the other four postulates of geometry.
This is hairsplitting. It's okay to say that there are true yet unprovable sentences without always carrying "true under the standard interpretation" around. It's understood from context.
IMHO in the context of logic, the usual meaning of 'true' is 'true in the theory', i.e. 'true in all models of the theory'. Using it for 'true in the standard model' is a kind of sleight of hand to make a more bombastic statement than it really is.
> The idea is sometimes expressed that instead of speaking of arithmetical statements as true or false, we should say that they are "true in the standard model" or "false in the standard model". The following comment illustrates:
>> This is the source of popular observations of the sort: if Goldbach's conjecture is undecidable in PA, then it is true. This is actually accurate, if we are careful to add "in the standard model" at the end of the sentence.
> The idea in such comments seems to be that if we say that an arithmetical statement A is "true" instead of carefully saying "true in the standard model", we are saying that A is true in every model of PA. This idea can only arise as a result of an over-exposure to logic. In any ordinary mathematical context, to say that Goldbach's conjecture is true is the same as saying that every even number greater than 2 is the sum of two primes. PA and models of PA are of concern only in very special contexts, and most mathematicians have no need to know anything at all about these things. It may of course have a point to say "true in the standard model" for emphasis in contexts where one is in fact talking about different models of PA.
> The problem is that the unprovable statements will include statements that are true, but that the system can't prove.
"5 contains itself" might also be true in some extended system where numbers are considered to also be sets. But in system where numbers are numbers and sets are something different this statement is unprovable because it's nonsensical.
What does "true" even mean if you are allowed to tangle things like "true" and "provable" in a self-referential paradox and draw conclusions from it?
I think Godel's result comes from a lax definition of "true", like Russell's paradox comes from a lax definition of how you can define a set.
I can concede that it may depend on the precise formal system you are using, but I think you can easily imagine a logic where a predicate ∈ takes two variables (x, y) of any kind and returns "true" iff y is a set and x is in y. In a well-founded system where numbers are ur-elements, ∈(5, 5) would be well-formed and equal to "false", like every other expression of the form ∈(x, x).
In this vein I would consider "5 smells insanity" nonsensical in the colloquial sense and at the same time false.
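A toy rendering of that predicate in Python (names are mine): membership is total, and simply false whenever the right-hand argument isn't a set:

    def mem(x, y):
        # ∈ as a total predicate: well-formed for any pair,
        # false whenever y is not a set
        return isinstance(y, frozenset) and x in y

    print(mem(5, 5))               # False - well-formed, just false
    print(mem(5, frozenset({5})))  # True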
I'm not a mathematician or logician, so take this with a grain of salt, but in my understanding, you can't really say that some unprovable thing is "really really true although unprovable". It's more like, it's independent from the axioms you started with. It might be true or false, depending on what additional axioms you decide to use.
> I believe you can make a consistent fully provable axiomatic system by excluding statements that are not provable from your system, as meaningless.
That's what ZFC does.
While "effing yourself with recursion" is a takeaway from his finding, to me at least it's a striking limitation of formal methods of 'sufficient ability'. There exists a completely understandable query that has no well-defined answer. Back when it was felt that mathematics could somehow prove itself, this must have been a crushing blow.
The trope of self-referential paradoxes as the foundation for grander proofs has always bugged me. My gut always says "sure, but what if we just banned self-referential paradoxes?"
That's not a thing. The problem isn't self referential statements. The problem is much larger, and self referential statements are the easiest way to construct a counterexample. Self referential statements are a symptom, not a cause.
Trying to construct a mathematical system of axioms is like trying to construct the most powerful system of computation possible that isn't subject to the halting problem.
You want to prove stuff, so you want a powerful system. Eventually, you end up with a system of axioms that is so powerful it can prove contradictory results, or you end up with a language where you cannot prove it halts or does not halt.
You want to not be able to prove untrue things true, and you want true things to be unable to be proven false, so you weaken your tools. Either you fail to weaken your tools enough, or you eventually end up unable to prove true statements are true.
In programming, the fact that it's unprovable that a program halts isn't actually a big deal. Most of the normal problems we deal with don't need algorithms which probe the boundaries of halting, and QA picks up the stuff we miss. (usually, and if not, it's the customer's problem, not mine) That doesn't work in math. In math, if you want to publish a paper that proves x, you want to be sure that nobody's going to publish a paper proving !x anytime soon. So you need a restricted language, unlike programmers.
It turns out banning self referential statements makes it really, really hard to prove stuff.
> The problem is much larger, and self referential statements are the easiest way to construct a counterexample. Self referential statements are a symptom, not a cause.
I'm curious, though, if things like the halting problem actually hold in the absence of (certain types of) self-referential statements. To my (very limited) knowledge, all proofs in these areas rely on them.
> Most of the normal problems we deal with don't need algorithms which probe the boundaries of halting, and QA picks up the stuff we miss.
But QA gets a bunch of excuses derived from the halting problem! For example:
> Boss: We have A which works, and we wrote B that is much faster, make sure they are functionally identical.
> QA: I can write a lot of tests, run fuzzers, but actually proving that A == B is undecidable!
Also, partially as a consequence of the halting problem, running untrusted code in a sandbox inherently means setting hard timeouts.
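For instance, a minimal Python version of that policy (command and budget made up):

    import subprocess

    try:
        # no general way to predict halting, so impose a wall-clock budget
        subprocess.run(["./untrusted_code"], timeout=5)
    except subprocess.TimeoutExpired:
        print("no answer within budget; the child process was killed")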
> It turns out banning self referential statements makes it really, really hard to prove stuff.
I'm not advocating for stripping out reflection from whatever system. Just saying (based on dumb gut feelings) that maybe we've stopped short of valuable results on things like "The halting problem (ignoring that one gotcha)".
> QA: I can write a lot of tests, run fuzzers, but actually proving that A == B is undecidable!
A == B is undecidable for some choice of A and B. It may very well not be for the particular A and B that you've handed to QA.
In a practical sense, though, it's worth noting that the excuse doesn't really derive from theoretical undecidability.
What we really care about is "can the benefit here be worth the cost". Depending on how you look at it, when it's undecidable there's infinite cost or no benefit, so in that case the answer is clear. But there are probably arbitrarily hard problems shy of undecidable. We occasionally solve math problems that tens or hundreds of people have been working hard on for decades - those weren't undecidable.
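And to see where the general-case impossibility comes from, a hedged Python sketch (`halts_within` is a hypothetical bounded-step simulator for some fixed machine M): you can always cook up a B that differs from A exactly when M halts, so a general A == B decider would decide halting.

    def make_b(a, halts_within):
        # halts_within(k): True iff machine M halts on its input within
        # k steps - decidable for each fixed k by plain simulation
        def b(x):
            if halts_within(x):
                return a(x) + 1  # deviate from A once M is seen to halt
            return a(x)
        return b

    # A == B everywhere  <=>  M never halts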
> I'm curious, though, if things like the halting problem actually hold in the absence of (certain types of) self-referential statements. To my (very limited) knowledge, all proofs in these areas rely on them.
I mean, if "certain types of self-referential statements" we're banning can include all recursion and (dynamic) loops, then the halting problem is trivial - your program halts.
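For example, in a toy fragment where every loop bound is fixed before the loop is entered (no while, no recursion), termination is immediate - a sketch:

    def bounded_example(n):
        # both bounds are fixed on loop entry and never modified,
        # so the program obviously halts for every n
        total = 0
        for i in range(n):
            for j in range(i):
                total += j
        return total

Such languages compute exactly the primitive recursive functions, so the price is expressiveness: they can't even compute the Ackermann function.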
Well, yah, an infinite number of them exist. I'm curious, though, how many of these principles a la the Halting Problem actually hold if they are excluded?
We take as doctrine that the halting problem is generally undecidable, but I haven't heard anything about any results for the Halting Problem (minus self referential paradoxes). Which is concerning because in practice nobody gives a crap about self referential paradoxes and they very rarely actually arise.
There is Rice's theorem (a simple extension of the Halting Problem and a basic theorem in computability theory), which states that all non-trivial, purely semantic properties of programs are undecidable. For more details, see https://en.wikipedia.org/wiki/Rice%27s_theorem .
The Wikipedia article gives two proofs of Rice's theorem.
a) As a corollary of Kleene's recursion theorem. This proof seems to rely on quines, and therefore relies on the same principle of self-referential paradox?
b) As a proof by reduction to the halting problem.
I'm not very confident in my interpretation of (a) ...
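For what it's worth, proof (b) doesn't need a quine. A sketch of the core construction in Python, with programs as plain callables (names mine): deciding any non-trivial semantic property P would let you decide halting.

    def build_probe(machine, inp, witness):
        # returns a program that behaves exactly like `witness`
        # iff machine(inp) halts, and diverges everywhere otherwise
        def probe(x):
            machine(inp)       # runs forever if machine doesn't halt on inp
            return witness(x)
        return probe

    # Choose `witness` to have property P, where the nowhere-halting program
    # lacks P (or swap the roles). Then "does probe have P?" answers
    # "does machine halt on inp?", so no decider for P can exist.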
I think those are called type theories. IIRC that was at least the idea behind the original type theories, before they got co-opted by the CS rabble. :)
> the original type theories, before they got co-opted by the CS rabble.
They are the same type theory! I realize you are at least partially joking, but it's worth noting.
Lambda calculus[0] came out of Church's studies[1] in the Foundations of Mathematics - the same field Russell was working on with his Logicism approach. Haskell Curry co-discovered the Curry–Howard correspondence, which shows the direct relationship between computer programs and mathematical proofs.
Russell himself developed the idea of the Type Hierarchy[2].
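To give a flavor of the correspondence (my illustration, in Lean): the very same term is both a program and a proof of modus ponens.

    -- a function of type (p → q) → p → q, read as a program;
    -- read as a proof, it shows that from p → q and p we may conclude q
    def modusPonens {p q : Prop} (h : p → q) (hp : p) : q := h hp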
Right, IIUC type theory started as Bertrand Russell's attempt to do exactly that (and follow through the ramifications - you can't "just" anything in math). Then suddenly, Haskell. Or something.
Russell publishes Principia Mathematica (1910-1913)
David Hilbert studies Principia Mathematica, and keeps working on open questions from it (1913 onwards):
He poses the proof of the consistency of arithmetic (and of set theory) again as the main open problems. In both these cases, there seems to be nothing more fundamental available to which the consistency could be reduced other than logic itself. And Hilbert then thought that the problem had essentially been solved by Russell’s work in Principia. Nevertheless, other fundamental problems of axiomatics remained unsolved, including the problem of the “decidability of every mathematical question,” which also traces back to Hilbert’s 1900 address.
These unresolved problems of axiomatics led Hilbert to devote significant effort to work on logic in the following years. In 1917, Paul Bernays joined him as his assistant in Göttingen. In a series of courses from 1917–1921, Hilbert, with the assistance of Bernays and Behmann, made significant new contributions to formal logic.[1]
In 1928 Haskell Curry moved to Germany to study under David Hilbert.
Curry's work developed into the Curry–Howard correspondence[2], which led to the language being named after him.