What people get wrong about Bertrand Russell (prospectmagazine.co.uk)
106 points by lermontov on Jan 7, 2020 | 79 comments



As someone who took a few university courses on analytic philosophy covering Russell, I've never heard anyone suggest he "sold out". Maybe it's just people who aren't really familiar with Russell or his personal life who say that?

If you're going to argue that he wrote sellout books, then he did that long before he gave up on logicism. See for example Introduction to Mathematical Philosophy, which is aimed at people who know nothing above regular public school math.

I think the ability to recognize that he had basically hit a dead end with logicism was probably his greatest success, instead of falling prey to a sunk-cost fallacy. The guy wrote one of the most influential papers in philosophy of the 20th century ("On Denoting"), as well as being one of the central figures in the founding of an entire field of study, but he knew when to call it quits, and I doubt he regretted it at all. His social and political writings and activism are a treasure trove of wisdom IMHO.

Edit: if I'm being fair though, it is pretty common for people to say History of Western Philosophy was a bit of a poor work that he did just because it would sell well. That is just one book out of hundreds of works though, and honestly it's not that bad if you balance it with more neutral sources on some of the material.


I don't think History of Western Philosophy is bad but it is opinionated. If someone were expecting either a dispassionate overview or heartfelt enthusiasm for all philosophers past I can see how they'd come away disappointed. Especially when he's particularly dismissive.

If you go in expecting Russell's view of Western philosophy through the ages then you'll get just that.


I thought Russell's History of Western Philosophy was interesting precisely because it gave Russell's personal view on all the philosophers. (This is just like Kenneth Clark's Civilisation.) There are already dozens of dry textbooks out there, whereas Russell was uniquely positioned to write an entertaining opinion piece on the topic.


Not to mention he is good at writing, and it is a very interesting read.


Read it many years ago and found it both enlightening and a good read. There were parts where he was clearly struggling to understand the work of particular philosophers, but he made no secret of that fact in the book. It was interesting to see a great mind grappling with the work of another. Parts made me chuckle, too.


History is a terrific book. Whatever its flaws, it's a great read, and often very witty.

Wasn't it based on lectures?


Since he’s up-front that he’s gonna be including opinion, and about his own limitations when it comes to presenting certain philosophies (IIRC for Bergson he was like “look I can’t figure out a way to explain this that doesn’t seem like total bullshit, but I’ll try”) I don’t really get the complaints about the book. I think it’s great. Why would you want Bertrand Russell to write such a book and not provide his analyses? It’s better that way! There are plenty of similar works by people mainly known for writing their histories of philosophy. Go for those if you want a just-the-facts version.


Agreed. I'm much more comfortable with "this is my take on this subject, and here's how I'm a bit biased, but I'll try anyway to be as objective as I can as well" than "this is the way things are"-style proclamations.


The chapter on Hegel's philosophy is worth the price of the book by itself.


As a side note, since this discussion is related to Russell and his brilliant contributions, I think a similar discussion can be had regarding Russell's contemporary and co-author Alfred North Whitehead (mentioned briefly in the article).

If Russell can be charged with "selling out" or directing his work toward a more general audience, Whitehead can be accused of the opposite, or perhaps even worse. If you read into his works post Principia (which he co-authored with Russell) you find a brilliant logician and philosopher begin to deviate from commonly held assumptions of Western thought and attempt to articulate a philosophy often at odds with "objective" ways of thinking. His works are interesting yet difficult because he is often so at odds with 20th century science and philosophy that he has to create his own terms to describe phenomena, which he builds upon with increasingly unfamiliar terminology until most readers feel completely alienated and give up.

Imo both Russell and Whitehead were great minds and deserve their fair share of consideration and contemplation, pre- and post- Principia.


Apropos AN Whitehead, I listened to an interesting talk the other week [1] about "process philosophy", specifically in phil of biology. The speaker (John Dupré) thought that he was working along the same lines as Whitehead, but that Whitehead was such a poor writer that no one can really be sure of having understood him.

[1] https://www.youtube.com/watch?v=ZhczxmLkGHI


If you're interested in Russell, or mathematics in general, I can't recommend the graphic novel Logicomix highly enough. It's about Russell and the Principia Mathematica.


As a former mathematician, I totally disagree.

Logicomix has nearly no mathematical content. The most technical part is a quick description of Hilbert's Hotel, but I thought it was very shallow, since there was no explanation. It did not even try to define infinity, or suggest how to distinguish several kinds of infinity. And, at least in the French edition, the Barber Paradox is wrongly stated!

Logicomix is mostly about people, especially Russell, with the postulate that everyone who worked on the foundations of logic was insane. But if you scrutinize the story, many details are wrong (IIRC, the young years of Russell, Frege's aggressive outbursts, the last years of Cantor, …). They bent reality to obtain the cliché that most people expected: genius mathematicians are mad.


I agree, the graphic novel was entertaining, but not an accurate portrayal of their lives.

If you're looking for something more factual, then Russell's own autobiography is a good place to start. Also "The Cambridge Companion to Bertrand Russell" (edited by Nicholas Griffin) is a source I can vouch for.

https://www.amazon.com/Cambridge-Companion-Bertrand-Companio...


I think there is an issue of mismatched expectations. Logicomix is just a popularizing effort that briefly surveys Russell's life. As somebody with little knowledge of post-Enlightenment philosophy, I honestly did not come away from it thinking philosophers were mad, but simply that his gallant effort in logicism reached the end of the line once Gödel entered the scene - which is what happened, by all accounts. He looked extremely reasonable, particularly considering how his whole prestige and career depended on the methods Gödel wiped away.

If I remember correctly (it's been a while since I read it), Logicomix really failed to explain how Russell went on to become relevant in public discourse at large, basically assuming that philosophers are interesting by default... I bought it mostly to find that out (he was very influential on my parents' generation, as one of the classic intellectuals mentioned in '68 movements) and was somewhat disappointed to just get the story of a logician instead. Still, it was a coherent story with great reverence for its subject, IMHO. Had it been trying even harder to delve into logic, as you expected, I would probably have thrown it out of the window.


It has little mathematical content and, as anaphor pointed out in a sibling comment, is not even an accurate portrayal of Russell's and his contemporaries' lives. I think trained mathematicians and the philosophically educated are probably just not the right target audience for the book.

I still found it an interesting and entertaining read. Just a few days ago I gave it to my eight-year-old and she came up with a lot of good questions while reading it. It is not a children's comic though, and it is definitely a book that needs guidance, especially because of the artistic liberty it takes in many respects. For example, the first thing I had to set straight is that Russell's brother was nothing like he was portrayed in the book.


I don't think teaching math was the goal of Logicomix at all.


Also his autobiography is pretty readable and worth reading if you want to know more about his personal life.


Christos Papadimitriou is one of the two authors.


>Arguably Russell saw the limitations of “technical philosophy” more clearly than those who thought of themselves as following in his footsteps. The analytic tradition has produced some great work but too many of its practitioners have conflated rigour with technical argument. I would wager that there is not a single major work of political or moral philosophy which depends on a formal logical proof. What endures of Russell’s logic is of interest only in logic. If Russell wanted to address the problems of real life, he had to leave behind the symbols and numbers that had so captivated him in his youth.

What are the shortcomings of logic? Just incompleteness, or is there more?


For a fun logical proof that proves why you can't use logical proofs for philosophical questions, see the "Tractatus Logico-Philosophicus" by Ludwig Wittgenstein.

https://www.gutenberg.org/files/5740/5740-pdf.pdf Introduction by Bertrand Russell

Spoiler: logic is inherently limited by language. To quote Russell, "In order to understand Mr Wittgenstein’s book, it is necessary to realize what is the problem with which he is concerned. In the part of his theory which deals with Symbolism he is concerned with the conditions which would have to be fulfilled by a logically perfect language. There are various problems as regards language. First, there is the problem what actually occurs in our minds when we use language with the intention of meaning something by it; this problem belongs to psychology. Secondly, there is the problem as to what is the relation subsisting between thoughts, words, or sentences, and that which they refer to or mean; this problem belongs to epistemology. Thirdly, there is the problem of using sentences so as to convey truth rather than falsehood; this belongs to the special sciences dealing with the subject-matter of the sentences in question. Fourthly, there is the question: what relation must one fact (such as a sentence) have to another in order to be capable of being a symbol for that other? This last is a logical question, and is the one with which Mr Wittgenstein is concerned. He is concerned with the conditions for accurate Symbolism, i.e. for Symbolism in which a sentence “means” something quite definite. In practice, language is always more or less vague, so that what we assert is never quite precise."


I'd say the major shortcoming of logic is its inability to express uncertainty.

It's well-suited for mathematical proof as practiced, where axioms and definitions are precisely defined, and there is no reliance on empirical observation with potentially noisy data.

However, most of real life is not as clear-cut. Deriving the truth of a statement may depend on multiple potentially faulty pieces of evidence which must be taken into account together. For this, one needs to assign probabilities.
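As a minimal sketch (the likelihood ratios here are invented), combining independent pieces of evidence amounts to multiplying odds:

    # Bayesian odds update: two independent, individually faulty pieces of
    # evidence for hypothesis H; the likelihood ratios are made-up numbers.
    prior_odds = 1.0                            # P(H) = 0.5
    posterior_odds = prior_odds * 4.0 * 3.0     # each factor: P(E|H) / P(E|not H)
    p = posterior_odds / (1.0 + posterior_odds)
    print(p)                                    # ~0.92: two weak signals add up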

This is useful even when applied back to mathematics. In practice, mathematicians form conjectures "likely to be true" long before they are formally proven. Additionally, they must narrow the search space in their minds in order to try the most likely avenues of proof, a process we refer to as "creativity".

Even using probability is only one more step towards solving the question of formally codifying general reasoning. We must also consider factors such as use of language and forming concepts (what precisely IS a "chair", after all?), and further aspects which form a basis for human action and which cannot be logically derived, namely our morality and base goals. Not to mention the entire plethora of such questions with which the field of philosophy concerns itself.

(These are the types of questions to which we will need to find some answer if we are ever to construct a useful generally reasoning AI)

Much as classical Newtonian mechanics is a useful approximation of physics at large scales and low speeds, formal logic is a useful approximation of reasoning at high certainty and low flexibility of interpretation.


> I'd say the major shortcoming of logic is its inability to express uncertainty.

There are formal logics that incorporate uncertainty, non-crisp truth values, or both.
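For instance, here is a minimal sketch of Łukasiewicz-style fuzzy connectives (one of several possible choices of t-norm), where truth values live in [0, 1] rather than {0, 1}:

    # Lukasiewicz fuzzy connectives: truth values are floats in [0, 1]
    def f_not(a):    return 1.0 - a
    def f_and(a, b): return max(0.0, a + b - 1.0)   # Lukasiewicz t-norm
    def f_or(a, b):  return min(1.0, a + b)         # dual t-conorm
    def f_imp(a, b): return min(1.0, 1.0 - a + b)

    # "the room is warm AND the light is dim", each only partly true:
    print(f_and(0.7, 0.6))   # 0.3 (modulo floating point)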


Indeed! I was speaking somewhat imprecisely - I was referring to logic in the sense of Bertrand Russell's work. I should rather say the major shortcoming of CLASSICAL logic is its inability to express uncertainty.

That being said, there are many flavours of non-classical logic (paraconsistent, multivalued, etc.), but their usage remains scarce outside of work in logic itself. Some intuitionistic, constructivist, and computational logics seem to be gaining popularity, especially in computer-related circles (computer-aided proof, numerical methods, etc.)


Handling uncertainty is difficult regardless; attaching probabilities to judgements did not ensure the social sciences avoided the replication crisis.

I'm not sure what you mean by "low flexibility of interpretation": purely logical proofs are supposed to assume nothing about interpretation.


> Handling uncertainty is difficult regardless; attaching probabilities to judgements did not ensure the social sciences avoided the replication crisis.

Indeed! I'd say that using probabilities is necessary (or at least very useful), but not sufficient for handling uncertainty.

> I'm not sure what you mean by "low flexibility of interpretation": purely logical proofs are supposed to assume nothing about interpretation.

Yes. I found this part hard to phrase. What you are saying is correct.

I meant it in the sense that there are some assumptions made in logic which do not necessarily hold in normal reasoning. As a simple example, classical logic requires no contradictions whereas the average person may hold several contradictory beliefs without going insane (human compartmentalization is there for a reason, after all!). Paraconsistent logics aim to address this. Classical logic also does not take into account the passage of time.

But by "flexibility of interpretation" I meant something like in logic to derive the truth or falsehood of a statement P(X) about some element X, we can only use known facts about X (i.e., previous statements Q, R, S). We pin down a very specific definition which we can interpret any way we want, but we pick axioms Q, R, S to match our interpretation. This is what I meant by the "interpretation of X is inflexible" (poor phrasing). I mean that the properties / axioms are decided at the beginning and do not shift.

However, when we reason in every day life about e.g. chairs, you and I don't start by pinning down an exact definition of what a "chair" is - we assume some shared knowledge and then debate despite starting from different worldviews. During the debate, we might decide to start pinning down a definition of a chair for the purpose of the debate (is it "something that has been created with the explicit purpose of being sat on"? What is "created"? What if I come across a log that I use to sit on every day? What if I take a dining room chair and stick it to the roof so no-one can sit on it?). If you are convincing enough, the way I use the word "chair" in every day life might change. This is what I mean by the "interpretation of the word chair is flexible".

---

My own background is in mathematics and programming, with an interest in mathematical logic. I'm afraid my philosophical background is rather lacking. I'm sure such concepts have been described in some depth by various philosophers, but I'm not sure which, or I would just reference the relevant concepts by their common name / link the relevant Wikipedia / Stanford Encyclopedia of Philosophy articles.


> However, most of real-life is not as clear-cut. Deriving the truth of a statement may depend on multiple potentially faulty pieces of evidence which must be taken into account together. For this, one needs to assign probabilities.

This is what the fuzzy people want you to believe. The logicians have a better answer: for this you need more context. E.g., in programming you would add types and pre- and post-conditions, not "this statement will be 85% true", as the current AI hype pretends.


I'm all for using types, and pre- and post-conditions where applicable, but I don't see how they would be a useful replacement to the situations in which probabilities would apply. Could you elaborate?

To give an example where I think probabilities would be used: consider a recognition AI that should figure out who someone is. You have a phone, on which you have some photos of its owner, some voice recordings, and some text messages. For each of those, the AI can assign probabilities that e.g. my voice matches the recordings, my face matches the photos, and my writing style matches the texts. Then it could combine these into an aggregate estimate probability that the phone belongs to me.
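To make the aggregation concrete, here is a naive-Bayes-style sketch; the independence assumption and the per-signal numbers are invented for illustration:

    # Combine per-signal likelihood ratios for "this person is the owner".
    # Assumes (naively) that voice, face, and writing style are independent.
    def combine(prior, likelihood_ratios):
        odds = prior / (1.0 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1.0 + odds)

    print(combine(0.5, [6.0, 10.0, 2.0]))   # ~0.99 with these made-up ratios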

How would you use types and pre- and post- conditions to solve this problem?


Some cases can be solved with probabilities, most not. These can be improved by better context.

The phone either belongs to you or not. Schrödinger's cat is either dead or alive, but not 80% dead as some people assert.


Oh, okay... are you referring to fuzzy logic where statements have a partial truth value?

I'm (mostly) referring to the case where the truth value is either true or false, but where you aren't sure, so you can say "80% probability this is your phone".

There are also cases where truth values aren't as clear cut, which I also mention, such as the question of whether or not something IS a chair. (Is a chair taped to the ceiling still a chair? Is a log I sit on out in the middle of the forest a chair? Etc.)


There can be horrible books with good theory - see John Rawls's A Theory of Justice.


Minor correction: Gödel's incompleteness theorem dates to 1931, not 1944.

That said, I agree that the later Russell's more popular writing has been given short shrift, and stands the test of time in many ways better than the earlier Russell's logicism.


That is true, Gödel's incompleteness theorem dates to 1931. In 1944 Gödel published his first philosophical paper, entitled "On Russell's Mathematical Logic".

The mistake was probably made because the author Julian Baggini is a philosopher and so he is mostly aware of Gödel's philosophical works and not so much of his mathematical accomplishments.


The Principia Mathematica was surely one of the most useful wrong-turns in history.


I highly recommend The History of Western Philosophy precisely because it’s highly opinionated. It’s got many things to like, including being frank about not knowing the answers, and treating figures such as Augustine and Aquinas as philosophers in their historical context. A very engaging book.


If you are looking for a proper introduction to western philosophy and a history of ideas, I recommend instead Antony Kenny's A New History of Western Philosophy, which academics think is an excellent introduction.


No, applied logic is useful, but you need computers because the proofs are big. https://en.wikipedia.org/wiki/Automath got started just around the time he died, so Russell still made the right call, but let's not sacrifice a good idea "cause Gödel" for all eternity to prop him back up.


I was halfway through the third or fourth paragraph and thoroughly confused about which Russell the author was talking about. I don’t understand why the author mentions the first Russell if the article isn’t (mainly) about a comparison between the two.

It confused me, so I’ll ask the obvious question: this article is about the second Russell who wrote Why I am not a Christian, right?


I don't know if this is what you are actually asking but it reads like that.

There are no two Russells, it's the same person, just different kinds of works in different periods of his life.


> There are no two Russells

What? Okay, I really didn’t realize that it’s just one Russell.

I mean, if you read the first paragraph and don’t have a background in philosophy, you would assume there were two persons named Russell! The author says the first Russell was short-lived and gives yearX-yearY. So yeah, I thought the first Russell died in yearY!!

I feel stupid now :-/


I never thought that Godel incompleteness of a logical system was particularly damaging; the ability to make self-referential statements is useful, the ability to resolve self-referential paradoxes less so.

Same for the halting problem in CS, which is typically resolved by (sleep 9999; kill -9 $pid). QED. ;-)


Russell's hope was to establish that one could find proof of all true mathematical statements, so that one needed a fixed-for-all-time set of axioms and nothing more. Incompleteness obviously destroys that hope.


There is an interesting, if quite technical, answer here about Goodstein's Theorem, a very reasonable number-theoretic theorem which cannot be proven in first-order logic.

So questions around statements which are true but not provable in certain logical systems do have concrete examples and are interesting imo.
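For a feel for the theorem, here is a small Python sketch of the Goodstein sequence itself; the sequence explodes for most seeds, yet provably always reaches 0 (just not provably in Peano arithmetic):

    # Goodstein sequence: write m in hereditary base-b notation, replace
    # every b by b+1, subtract 1, repeat with b+1. Always terminates, but
    # proving that requires transfinite induction beyond Peano arithmetic.
    def rebase(m, b, new_b):
        # rewrite m from hereditary base b to hereditary base new_b
        total, exp = 0, 0
        while m > 0:
            m, digit = divmod(m, b)
            if digit:
                total += digit * new_b ** rebase(exp, b, new_b)
            exp += 1
        return total

    def goodstein(m, max_steps):
        seq, b = [m], 2
        for _ in range(max_steps):
            if m == 0:
                break
            m = rebase(m, b, b + 1) - 1
            b += 1
            seq.append(m)
        return seq

    print(goodstein(3, 10))   # [3, 3, 3, 2, 1, 0]
    # goodstein(4, ...) also reaches 0, but only after ~3 * 2**402653211 steps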

https://math.stackexchange.com/questions/625223/do-we-know-i...


Goodstein's Theorem cannot be proved in the Peano axiomatization of the natural numbers in first-order logic. You can have a stronger axiomatization of the natural numbers in first-order logic that allows one to prove Goodstein's Theorem - such an axiomatization would contain a subset of set theory and transfinite induction up to the ordinal e_0. But it is still an axiomatization in first-order logic.


As mentioned in that answer, these "natural independence phenomena" do seem much more interesting than Gödel's incompleteness results.


Something like a Turing machine, with its finite states, deterministic transitions, and simple tape seems so mechanical, so physical. So predictable.

Sure, there's the halting problem, but that relies on a paradox.

Surely something as artificial as self-referential statements would seem a bit pointless.

And yet, with the help of that principle, it turns out we can write simple, mechanical programs where if the input is < N we can calculate the answer, and if the input is > N we can't figure out what the program will do (using standard mathematical thinking).

For some N we can get creative, but for a yet bigger N, we may well find ourselves unable to be creative enough to work out if we could ever work out the answer.

https://en.wikipedia.org/wiki/Busy_beaver#Non-computability

"In 2016, Adam Yedidia and Scott Aaronson obtained the first (explicit) upper bound on the minimum n for which Σ(n) is unprovable in ZFC. To do so they constructed a 7910-state[2] Turing machine whose behavior cannot be proven based on the usual axioms of set theory (Zermelo–Fraenkel set theory with the axiom of choice), under reasonable consistency hypotheses (stationary Ramsey property).[3][4] This was later reduced to 1919 states, with the dependency on the stationary Ramsey property eliminated.[5][6]"


My thoughts exactly. Goedel is just basically "set of all sets that don't contain themselves ... does it contain itself or not?" applied to 'provability'.

It gives you no more profound insight than "you can eff yourself with recursion if you aren't super careful", which is obvious to any beginner programmer who encounters recursion and tries to write whatever he likes in a recursive function.

I guess this might have been a surprise for mathematicians who always thought they had all infinities at their disposal and thus were completely unrestricted, and who brushed self-referential paradoxes under the rug as curiosities until Gödel showed they can be constructed about things mathematicians care about, like provability.

I believe you can make a consistent, fully provable axiomatic system by excluding, as meaningless, statements that are not provable in your system.

You don't consider whether "number 5 contains itself" is true because it's nonsensical. "This statement is unprovable" can be considered similarly nonsensical, not because it wrongly combines math concepts but because it's a self-referential paradox and we choose not to allow that.


> I believe you can make a consistent fully provable axiomatic system by excluding statements that are not provable from your system, as meaningless.

The problem is not how to exclude the unprovable statements. The problem is that the unprovable statements will include statements that are true, but that the system can't prove. Some of those statements are probably not ones you care about, like "this statement is unprovable". But you have no way of knowing that all of the unprovable true statements are like that. Some of them might be ones you do care about.

Thus, the real import of Goedel's theorem is not "you need to exclude unprovable statements"; it is "the intuitively attractive ideal of having a system in which every true statement you care about can be proved is not achievable".


"The intuitively attractive ideal of having a system in which every true statement you care about can be proved is not achievable".

Sure it is. But the system cannot contain infinities or real numbers of infinite precision. The halting problem for deterministic computers with finite memory is solvable - the system must either halt or repeat a state, and if it repeats a state, it's in an endless loop. Yes, the number of states may be very large, but not infinite.
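A minimal sketch of that argument, assuming a deterministic step function over a finite state space (the convention that None means "halted" is mine):

    # Decide halting for a deterministic machine with finitely many states:
    # run it, remembering every state seen; a repeat means it loops forever.
    def halts(step, state):
        seen = set()
        while state is not None:   # convention: None = halted
            if state in seen:
                return False       # repeated state => endless loop
            seen.add(state)
            state = step(state)
        return True

    print(halts(lambda s: None if s == 0 else s - 1, 2))   # True: counts down
    print(halts(lambda s: (s + 1) % 3, 0))                 # False: cycles forever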

An interesting question to ask is "do we need infinities". In a sense, infinities are a convenience feature to make formulas simpler. Without infinities, everything has edge cases, bounds problems, tolerances, and all the annoyances known to people who do serious computer number-crunching. Theory in that area has lots of case analysis.

It's perhaps unfortunate that Russell lived in the pre-computer era. Simple theorems were essential in the paper and pencil era. In the 1970s and 1980s, Boyer and Moore took on much the same problem as Principia Mathematica. They published "A Computational Logic"[1], and built the first good inductive theorem prover for constructive mathematics. (I used it back then, and a few years ago I got it running again on Common LISP and put it on Github, so you can still run it.[2]) Proofs that come out of that thing have lots of case analysis. Mathematicians used to hate proofs like that. Many still do.

At one point Boyer and Moore formalized what a floating point unit does, and this was used to verify an AMD floating point unit. This followed an embarrassing recall of Intel CPUs that were slightly off on division, and was paid for by AMD. It's a huge, ugly problem to formalize, compared to the ordinary rules for real arithmetic. It is, however, a system in which every true statement you care about can be proved.

The proof of the four-color theorem shook up the field. Thousands of cases. A simple problem with a huge proof that required machine assistance was a new thing back then. Now it's more common. Still makes many mathematicians unhappy.

[1] https://www.cs.utexas.edu/users/boyer/acl.pdf [2] https://github.com/John-Nagle/nqthm


> But the system cannot contain infinities or real numbers of infinite precision.

But, ah, oops, there goes a multitude of truths that I care about and want proved.


> The problem is that the unprovable statements will include statements that are true, but that the system can't prove.

No, that is a misrepresentation of Goedel's results. A theorem that is undecidable (neither provable nor refutable) from a set of axioms cannot be 'true' in the logical sense (because there are models of that set of axioms in which the theorem is true, and other models in which the theorem is false) - see Goedel's completeness theorem, which says that every truth is provable (and vice versa).

Goedel's incompleteness theorems can be understood on the semantic level as saying that the mathematical structure of the natural numbers cannot be characterized by a sane set of axioms, so any such attempt (e.g. the Peano axioms) that describes the natural numbers also describes a different mathematical structure (a nonstandard model of arithmetic), and there exists a theorem that is true in one and false in the other model (so that theorem is undecidable).


An undecidable proposition is true in the logical sense if we declare it true and adopt it as an axiom.

It might seem that it's not true in that logical way which insists that truth is derived from existing axioms. But that concept depends on unproven axioms in the first place, and derivation can be a zero-step process.

That is to say the axiom we have added is now derived from an existing axiom (itself) by an empty sequence of logical steps.


How do you find that undecidable proposition? You derive it from axioms. You can't declare it axiomatic if you can't find it because it's undecidable.

The big idea that the Incompleteness Theorem torpedoed was that it's possible to enumerate all and only those theorems that are true/provable for a given set of axioms. To get all of them, you must allow unprovable theorems; to eliminate the unprovable ones, you'll necessarily also eliminate some provable theorems. It's Heisenberg's Uncertainty Principle for logic, and it was exactly that destructive to logical determinism/positivism.


> A theorem that is undecidable (neither provable nor refutable) from a set of axioms cannot be 'true' in the logical sense (because there are models of that set of axioms in which the theorem is true, and other models in which the theorem is false)

Yes, fair point. A more careful way to make the point I was trying to make is that you might care about the models in which the undecidable statement is true.


E.g. we certainly care about geometries in which a pair of lines that cross another line at the same angle are truly parallel, such that they do not meet.

We care about this even though it's not provable from four other postulates of geometry.


This is hairsplitting. It's okay to say that there are true yet unprovable sentences without always carrying "true under the standard interpretation" around. It's understood from context.


IMHO in the context of logic, the usual meaning of 'true' is 'true in the theory', i.e. 'true in all models of the theory'. Using it for 'true in the standard model' is a kind of sleight of hand to make a more bombastic statement than it really is.


Let me quote Torkel Franzen on this issue:

> True in the Standard Model

> The idea is sometimes expressed that instead of speaking of arithmetical statements as true or false, we should say that they are "true in the standard model" or "false in the standard model". The following comment illustrates:

>> This is the source of popular observations of the sort: if Goldbach's conjecture is undecidable in PA, then it is true. This is actually accurate, if we are careful to add "in the standard model" at the end of the sentence.

> The idea in such comments seems to be that if we say that an arithmetical statement A is "true" instead of carefully saying "true in the standard model", we are saying that A is true in every model of PA. This idea can only arise as a result of an over-exposure to logic. In any ordinary mathematical context, to say that Goldbach's conjecture is true is the same as saying that every even number greater than 2 is the sum of two primes. PA and models of PA are of concern only in very special contexts, and most mathematicians have no need to know anything at all about these things. It may of course have a point to say "true in the standard model" for emphasis in contexts where one is in fact talking about different models of PA.


> The problem is that the unprovable statements will include statements that are true, but that the system can't prove.

"5 contains itself" might also be true in some extended system where numbers are considered to also be sets. But in system where numbers are numbers and sets are something different this statement is unprovable because it's nonsensical.

What does even "true" mean if you are allowed to tangle things like "true" and "provable" in self referential paradox and draw conclusions from it?

I think Godel result comes from lax definition of "true". Like Russel's paradox comes from lax definition of how can you define a set.


In a system where numbers are not sets this statement is provably false.


No. It's nonsensical there. It's not true or false.

Like "5 smells insanity" is nonsensical in natural language.

You can't even begin to consider if it's true or false because it's not a sentence (in the sense of mathematical logic).

https://en.m.wikipedia.org/wiki/Sentence_(mathematical_logic...


I can concede that it may depend on the precise formal system you are using, but I think you can easily imagine a logic where a predicate ∈ takes two variables (x, y) of any kind and returns "true" iff y is a set and x is in y. In a well-founded system where numbers are ur-elements, ∈(5, 5) would be well-formed and equal to "false", like every other expression of the form ∈(x, x).

In this vein I would consider "5 smells insanity" nonsensical in the colloquial sense and at the same time false.
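A tiny sketch of that convention, modelling sets as frozensets and numbers as ur-elements:

    # Membership predicate where non-sets (ur-elements) simply have no members:
    def member(x, y):
        return isinstance(y, frozenset) and x in y

    print(member(5, 5))                 # False: well-formed, just false
    print(member(5, frozenset({5})))    # True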


I'm not a mathematician or logician, so take this with a grain of salt, but in my understanding, you can't really say that some unprovable thing is "really really true although unprovable". It's more like, it's independent from the axioms you started with. It might be true or false, depending on what additional axioms you decide to use.


> I believe you can make a consistent fully provable axiomatic system by excluding statements that are not provable from your system, as meaningless.

That's what ZFC does.

While "effing yourself with recursion" is a takeaway from his finding, to me at least it's a striking limitation of formal methods of 'sufficient ability'. There exists a completely understandable query that has no well-defined answer. Back when it was felt that mathematics could somehow prove itself, this must have been a crushing blow.


But imagine what we could do if we didn't have incompleteness or the halting problem


The trope of self-referential paradoxes as the foundation for grander proofs has always bugged me. My gut always says "sure, but what if we just banned self-referential paradoxes?"


That's not a thing. The problem isn't self referential statements. The problem is much larger, and self referential statements are the easiest way to construct a counterexample. Self referential statements are a symptom, not a cause.

Trying to construct a mathematical system of axioms is like trying to construct the most powerful system of computation possible that isn't subject to the halting problem.

You want to prove stuff, so you want a powerful system. Eventually, you end up with a system of axioms that is so powerful it can prove contradictory results, or you end up with a language where you cannot prove it halts or does not halt.

You don't want to be able to prove untrue things true, and you want true things to be unable to be proven false, so you weaken your tools. Either you fail to weaken your tools enough, or you eventually end up unable to prove true statements are true.

In programming, the fact that it's unprovable that a program halts isn't actually a big deal. Most of the normal problems we deal with don't need algorithms which probe the boundaries of halting, and QA picks up the stuff we miss. (usually, and if not, it's the customer's problem, not mine) That doesn't work in math. In math, if you want to publish a paper that proves x, you want to be sure that nobody's going to publish a paper proving !x anytime soon. So you need a restricted language, unlike programmers.

It turns out banning self referential statements makes it really, really hard to prove stuff.


> The problem is much larger, and self referential statements are the easiest way to construct a counterexample. Self referential statements are a symptom, not a cause.

I'm curious, though, if things like the halting problem actually hold in the absence of (certain types of) self-referential statements. To my (very limited) knowledge, all proofs in these areas rely on them.

> Most of the normal problems we deal with don't need algorithms which probe the boundaries of halting, and QA picks up the stuff we miss.

But QA gets a bunch of excuses derived from the halting problem! For example:

> Boss: We have A which works, and we wrote B that is much faster, make sure they are functionally identical.

> QA: I can write a lot of tests, run fuzzers, but actually proving that A == B is undecidable!

Also, partially as a consequence of the halting problem, running untrusted code in a sandbox inherently means setting hard timeouts.
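A minimal sketch of such a hard timeout (the file name and the 5-second budget are invented for illustration):

    # Run untrusted code with a wall-clock limit; since we cannot decide
    # whether it halts, we simply cut it off.
    import subprocess
    try:
        subprocess.run(["python3", "untrusted.py"], timeout=5)
    except subprocess.TimeoutExpired:
        print("killed after 5s: maybe an infinite loop, maybe just slow")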

> It turns out banning self referential statements makes it really, really hard to prove stuff.

I'm not advocating for stripping out reflection from whatever system. Just saying (based on dumb gut feelings) that maybe we've stopped short of valuable results on things like "The halting problem (ignoring that one gotcha)".


> QA: I can write a lot of tests, run fuzzers, but actually proving that A == B is undecidable!

A == B is undecidable for some choice of A and B. It may very well not be for the particular A and B that you've handed to QA.

In a practical sense, though, it's worth noting that the excuse doesn't really derive from theoretical undecideability.

What we really care about is "can the benefit here be worth the cost". Depending on how you look at it, when it's undecidable there's infinite cost or no benefit, so in that case the answer is clear. But there are probably arbitrarily hard problems shy of undecidable. We occasionally solve math problems that tens or hundreds of people have been working hard on for decades - those weren't undecidable.


> I'm curious, though, if things like the halting problem actually hold in the absence of (certain types of) self-referential statements. To my (very limited) knowledge, all proofs in these areas rely on them.

I mean, if "certain types of self-referential statements" we're banning can include all recursion and (dynamic) loops, then the halting problem is trivial - your program halts.


Well, and that is the essence of Gödel's first theorem: you can't - any sufficiently powerful system can generate an infinite number of them.

Options include not letting this bother you (my favourite) and just ignoring it and hoping it doesn't come up, as indeed, it often doesn't.


Well, yah, an infinite number of them exist. I'm curious, though, how many of these principles a la the Halting Problem actually hold if they are excluded?

We take as doctrine that the halting problem is generally undecidable, but I haven't heard anything about any results for the Halting Problem (minus self referential paradoxes). Which is concerning because in practice nobody gives a crap about self referential paradoxes and they very rarely actually arise.


There is Rice's theorem (which is a simple extension of the Halting Problem and a basic theorem in computability theory), which states that all non-trivial, purely semantic properties of programs are undecidable. For more details, see https://en.wikipedia.org/wiki/Rice%27s_theorem .


Interesting, but...

The Wikipedia article gives 2 proofs for Rice's theorem.

a) As a corollary of Kleene's recursion theorem. This proof seems to rely on quines, and therefore relies on the same principle of self-referential paradox?

b) As a proof by reduction to the halting problem.

I'm not very confident in my interpretation of (a) ...


I think those are called type theories. IIRC that was at least the idea behind the original type theories, before they got co-opted by the CS rabble. :)


> the original type theories, before they got co-opted by the CS rabble.

They are the same type theory! I realize you are at least partially joking, but it's worth noting.

Lambda calculus[0] came out of Church's studies[1] in the Foundations of Mathematics - the same field Russell was working on with his Logicism approach. Haskell Curry co-discovered the Curry–Howard correspondence which shows the direct relationship between computer programs and mathematical proofs.

Russell himself developed the idea of Type Hierarchy[2]

[0]https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus

[1] https://en.wikipedia.org/wiki/Type_theory

[2] https://en.wikipedia.org/wiki/Logicism#A_solution_to_impredi...


Right, IIUC type theory started as Bertrand Russell's attempt to do exactly that (and follow through the ramifications - you can't "just" anything in math). Then suddenly, Haskell. Or something.


> Then suddenly, Haskell.

Sequence of events:

Russell publishes Principia Mathematica (1910-1913)

David Hilbert studies Principia Mathematica, and keeps working on open questions from it (1913 onwards):

He poses the proof of the consistency of arithmetic (and of set theory) again as the main open problems. In both these cases, there seems to be nothing more fundamental available to which the consistency could be reduced other than logic itself. And Hilbert then thought that the problem had essentially been solved by Russell’s work in Principia. Nevertheless, other fundamental problems of axiomatics remained unsolved, including the problem of the “decidability of every mathematical question,” which also traces back to Hilbert’s 1900 address.

These unresolved problems of axiomatics led Hilbert to devote significant effort to work on logic in the following years. In 1917, Paul Bernays joined him as his assistant in Göttingen. In a series of courses from 1917–1921, Hilbert, with the assistance of Bernays and Behmann, made significant new contributions to formal logic.[1]

In 1928 Haskell Curry moved to Germany to study under David Hilbert

Curry's work developed into the Curry–Howard correspondence[2] which led to the language being named after him.

[1] https://plato.stanford.edu/entries/hilbert-program/#1.2

[2] https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...


> But although it is true that Russell’s political writings were often naive and simplistic, ...

Say what? Of all the books on politics I've read in my youth, Russell's were some of the best. Not simplistic but written in a clear language. Not naive but stemming from the rich classical liberal and socialist tradition of the likes of Wilhelm von Humboldt and John Stuart Mill. A tradition that has been completely erased from the history books in the last decades (which ought to make his political works all the more interesting).

(I fully accept the possibility that I am in fact naive and simple-minded, of course. :D)



