Why I Don't Love Gödel, Escher, Bach (infinitenegativeutility.com)
578 points by DyslexicAtheist on July 5, 2018 | 332 comments



I'd owned this book for many years and had a similar experience to OP. But one day someone gave me `I am a Strange Loop`, which I started reading and enjoyed way more. A little way in, Hofstadter makes some apologies for GEB, saying it was the work of a very young person. I think he was 24? I think it's pretty incredible considering his age. The way his disabled sister led him to think critically about consciousness changed not only how I interacted with people who were differently abled, but also how I saw my own disabilities. Sure it's more than a bit full of itself, but I can't for the life of me think how I would edit it any differently. It's honest from the place he was standing: a lot of thought, a desire to share, and a perspective that no one else could just stumble on. IMO, it really does kind of have to be what it is.

So, I decided to put down IAASL and try to really get through GEB first... like for real this time... and found an OpenCourseWare course[0] on the book to follow, hoping that would help me really cut through it. It did! The trick that did it for me was the prof's suggestion to just skip the first 300 or so pages; he picked up from there.

I'd thumbed through it and, much like OP, had "looked" at every page, but once I skipped the first 300 and followed along with the course, it was like butter. There is something funny about all the intro dialogs: they can fatigue you by the time you get to the meat.

That said, I really enjoyed his reflection on GEB in IAASL and enjoyed the read of IAASL a lot more. Regardless of what you think about what Hofstadter proposes, I think his contribution to critical exploration of the consciousness is artful and invaluable. I think it's fun, compassionate, and beautiful, and it really changed how I see myself and my environment.

It sticks with me and I think about it a lot and it makes me happy to share it with people.

[0] https://ocw.mit.edu/high-school/humanities-and-social-scienc...


> I think he was 24? I think it's pretty incredible considering his age.

This makes sense. I read the book at the age of 20 or so, and it made me change my life and my career path. It's very playful; it informs, connects and educates. I think the older and more cynical me might not have enjoyed it so much.

I think "Metamagical Themes" and "Fluid concepts and analogies" are his best books, although GEB had the largest impact on me. They are more refined explorations of his ideas.

The value of Hofstadter's life work is his single-minded exploration of one of the hardest problems in cognitive science. Unlike conventional AI/ML researchers, he tackles problems beyond the reach of the state of the art, refusing to pick easier ones to solve even when he can't make progress.

Hofstadter's main proposition is that analogy-making is the core of human cognition. He has made repeated attempts to pin it down into something formal using toy examples and programs (Copycat, Letter Spirit). I think his approach has been very insightful even if nothing comes of it.

In my opinion "Fluid concepts and analogies" is the culmination of his work. GEB and "Metamagical Themes" are the prelude; what came afterwards is more like prose, and maybe a reflection of his personal tragedy.


> Metamagical Themes

It's Metamagica Themas, I think - an anagram of 'Mathematical Games', Martin Gardner's column in Scientific American which preceded it.

A lovely book.


I was briefly very confused until I figured out you were correcting Themes to Themas, and the dropped "l" was just an accident!


If you were only briefly confused, I congratulate you - it would have confused the heck out of me for much longer.


> Metamagica Themas

Metamagical Themas

(You left out the L.)


Metamagical Thomas


This parallels its influence on me as well. In particular FCaCA has been a good source to learn how to narrow down problems so they can be tackled without diluting the original essence of the problem. Masterful treatment in my opinion.

I still think Hofstadter's "old school AI" thought processes warrant more attention in today's DL frenzy than they get. His books have been a great source of very interesting questions, including style transfer (he dealt with fonts in "Letter Spirit"). It was also the inspiration behind some early experiments using neural nets [1] (shameless plug).

[1] https://blog.imaginea.com/deep-type/


For anyone scratching their head, FCaCA is https://en.wikipedia.org/wiki/Fluid_Concepts_and_Creative_An...


> I'd owned this book for many years and had a similar experience to OP. But one day someone gave me `I am a Strange Loop`, which I started reading and enjoyed way more.

Thank you for sharing your experience, I may have to give `I am a Strange Loop` a try after all, and do the skip-300-pages trick with GEB.

I've tried to get through it multiple times, but I never managed to get far into the later chapters without literally falling asleep. I don't blame the book; it's likely a consequence of my ADD. However, a significant portion of my friends did manage to get through it, and the summaries and explanations they have given of it were very inspiring. Another reason for me to conclude that it's not the book, but me ;)

Perhaps interestingly, I typically have no trouble reading long, dense books. One of my favourite books ever is "The Master & His Emissary", an incredibly flawed but also very inspiring tome mixing loose interpretations of brain science, art history and philosophy (the best way to deal with its flaws is to approach any "scientific" argument as a philosophical line of reasoning inspired by a layman's (mis)interpretation of the underlying science; it's the reasoning itself that matters). I wonder if the difference in writing styles between TM&HE and GEB could tell us something about what kind of stimuli an ADHD person does and does not respond to?


I also live with ADHD. I think that's maybe why the dialogs at the beginning were so impenetrable to me without context. They are important and do a great job of illustrating what he's getting at, but for me it just felt like reading nonsense until I had some context for what they were exploring. In my initial readings I found myself not just bored but kind of offended by how wanky they were. Once I got a better understanding of what he was trying to say, I could see the weave of the thing and even have a few laughs along the way.


> I also live with ADHD.

I wonder if you would like McGilchrist's book then. It's full of associations, drawing lines between seemingly unrelated things, and sentences containing more than seven commas are not uncommon.


I've had a hard time with GEB too. Tried a few times. I felt ashamed because my progress was more in the tens than over a hundred.

Writing style is important for me too. I can complete a narrative, story-style book in 2 days. I can't complete a non-fiction book in that time that's half the size. My favourite books are ones that set out to teach something practical in the form of a story (like The Goal).

I have finished non-fiction books. They tend to be fast paced and immediate with what I want out of them. To the point.

In general, that fits with the ADHD type, i.e. cut the bullshit and get to the point!


> "The Master & His Emissary", an incredibly flawed but also very inspiring tome mixing loose interpretations of brain science, art history and philosophy

What are some of the flaws? Or what would I be able to read in order to notice them?


To give a full list would be impossible, simply because the book is so dense, and it wouldn't make much sense without context. The "Reception" section of the book's Wikipedia entry gives a good general overview of what's good and bad about it[0].

Basically, McGilchrist makes claims about how the brain works with a certainty that we simply don't have, let alone should draw conclusions from as if they were facts. He's also prone to a "start from a viewpoint and find evidence that matches it" line of thought (Popper would not be happy...). But if you are aware of that and appreciate how the whole overarching narrative fits together, it's a very enlightening book with very inspiring lines of thought.

Anyway, in my experience it goes very well with:

- "Metaphors We Live By" and other works by Lakoff & Johnson, starting with the book that introduced the concept of "conceptual metaphor" to the world. They are referred to a lot, and having read them helps you spot when McGilchrist might be misinterpreting their work.

- "The Stuff of Thought" by Steven Pinker, who, aside from writing a really fun book on language, also gives some good arguments tempering Lakoff & Johnson's overenthusiasm a bit without being dismissive of them (although in their defense, metaphors are criminally underrated in their importance to thinking).

- most books by Oliver Sacks, who is more qualified to make statements about the brain than anyone else on this list, but "Seeing Voices" especially, because it really dives into the connection between language and brain development.

[0] https://en.wikipedia.org/wiki/The_Master_and_His_Emissary#Re...


This.

100 times, this.

GEB is an alright book, but nobody I know has actually managed to finish it. I Am A Strange Loop discusses the exact same thing, but does so with the experience of years of writing in metaphor, and more importantly, entirely in metaphor. All of the metaphors are well-crafted: they are a pleasure to imagine, easy to recall, and accurately map to what the author is trying to express. IAASL also distills the main thesis of GEB, so it is not only more expressive but much more to the point (which I, too, think is the main problem with GEB as a book).

I do not hesitate to recommend it: for all those that struggled with GEB, or all those that wish to read GEB, pick up a copy of IAASL, and then if you still wish to read GEB, try it after reading IAASL.


Sometimes when my girlfriend and I are lying in bed after a long day and just aren't quite ready to unplug and sleep, we take turns reading randomly selected sections from IAASL. There's a couple other books in the rotation, but it's a fun one. The mini theater of it becomes kind of self referential as we try to mess up how the other person feels about the reality being thrown at them.

You really can't do that with GEB. It's still valuable, but I think Hofstadter covers the issues with GEB himself just fine so I won't pick it apart.

But yeah, we just pick a section at random and try to sell it to each other with a read. Sometimes we do kind of a theatrical Exquisite Corpse, paragraph to paragraph. It's silly, but something about not just the content but the context of trying to mess with each other's consciousness through the read makes it fun, and after 5 minutes you're having a laugh and forget about whatever you cared about all day. Turtles all the way down. Lol.

I know it's dumb, but I just wanted to share. Works ok with Garfield comics too.


You're a lucky person to have a partner who appreciates this!


Yeah I'm jealous too


I'm curious what the other books in the rotation are, if you don't mind sharing.


I read the whole book in almost one go over a weekend when I was 20. I think I took a 3-hour break to sleep. I was very hungry and tired, but I was unable to stop.

I read it again over the next six months, in small pieces.


Same same. And have reread any number of times over the years.


So now I have to get it out and read it a fourth (fifth?) time. I don't really have the time but I have to.


> nobody I know has actually managed to finish it

Well, I guess that depends on what you mean by "finish it". Like the blog post guy says, he "looked at every page", but he's not sure he actually absorbed everything about it there was to absorb.


“... but nobody I know has actually managed to finish it.“

I’ve read it cover to cover, twice. It’s not a bad book, but it would have helped if he had used the Lambda calculus. YMMV.


Would that have helped the non-mathematicians that read it? I'm in that camp and never quite grasped the whole Lambda thing.

(Maybe you know of something I could read that would fix that?)


Best resource I've found is a book called "Introduction to Functional Programming through Lambda Calculus": https://www.amazon.com/Introduction-Functional-Programming-C...
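If it helps to get a first taste before committing to a book: one of the earliest ideas such a course covers is Church numerals, where a number n is just a function that applies another function n times. A minimal sketch in Python (my own illustration, not from the book):

```python
# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application

# Addition: apply f m times on top of n applications.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral back to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(add(two)(one)))  # → 3
```

Everything is a one-argument function, which is the whole game of the lambda calculus; the book builds up from there.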


You're not supposed to finish GEB, there's even a section in the book about that.


"[...]but nobody I know has actually managed to finish it."

I read the book entirely, from first to last page, including the bibliography, five times in five years. I was between the ages of approximately 19 and 24, and I did not read many other books in that time. Every time I read it again, it felt like a new book. I still casually read random parts of it to this day.

I like Getty's review, although I do not share it at all. I learned many years ago, from a very wise friend, that when I struggle - for example, with certain habits of my partner - the root of my struggling is most likely not within the other, but myself. It points to some personal deficit. This friend taught me to explore myself before I demand any change from the other. Why is it bothering me? What would I have to change in myself to get over it?

Applying this rule, in Getty's place I would ask myself: "How come so many people are appealed to by this book in a way that I am not? What makes the readers that share my criticism and those that don't different?" This sort of introspection is very difficult and sometimes impossible. But in the case of Getty's reading of GEB I might offer some clues:

The parts of the book that receive the most criticism in Getty's review are the parts that I would consider the entertaining sections. For example: never did I perceive the dialogs in the book as didactic. I always perceived the dialogs as a rest period, an intake of breath. After long didactic and sometimes hard-to-swallow scientific passages, the dialogs, in their playful and sometimes childish style, would allow my brain to settle down, relax and get ready, very softly, for the next topic - like you would shake out your arms and legs between exercises. The dialogs were always welcome intermissions for me, but I did not take them very seriously, and I did not at all expect them to be an essential part of the didactic mission of the book.

The second major element of criticism in Getty's review concerns topics in the book that do not belong to the core themes (Gödel's proof, logic, formal systems), but are brought in by the author in a way that, for one, does not properly do them justice, and which (according to Getty) he shouldn't have referred to in the first place, as he has no sufficient proficiency in those topics.

Getty's expectation obviously is that GEB is supposed to be a very serious scholarly and didactic work that must not deviate from its core topic. And I believe that is a mis-expectation: Getty expected a textbook, but got a novel. In MY perception GEB is the novel of Gödel's proof, and as such it can play and entertain and experiment to any degree that the author desires. Of course the author can make fun of Cage and completely miss Zen buddhism. For me, it was absolutely clear and obvious in every paragraph of the book whether the author was being didactic or playful or personal at any point. The mixture is the essence of the book that makes it so appealing to me. Textbooks are often too dry and hard to swallow; novels on scientific topics almost never go deep enough for me. GEB is an experimental format that hit the sweet spot for me.

I want to make clear that Getty is not "wrong" in his perception - it is his view, and it is absolutely legitimate as such. I believe, though, that it is a very big error to blame the author for not fulfilling one's own expectations, or to try to convince others to adapt their perception to one's own - there is no point in this. If Getty is really interested in what makes him tick differently than others, he should introspect on his indisposition with the "soft", playful, not-so-scientific aspects of the book. Why is he upset with those parts where the author deviates from his views? Why is he unable to distinguish between the serious and the playful parts of the book? It might point him to some revealing aspects of his personality. There is no outcome in the sense of right or wrong!

I liked Getty's review - not for what it was supposedly written for (revealing to me weaknesses of the book I hadn't discovered yet, or convincing me that the book I so much love is actually not lovable), but for having the courage to provide me with an insight into another type of brain and perception, and therefore another aspect of personhood, which I found very interesting and entertaining to learn about! This comes with absolutely no judgment from my side!


> the root of my struggling is most likely not within the other, but myself.

If only more people would think like that! (Not thinking of me here ... ;-) )


As someone who enjoyed GEB, I can see where Getty is coming from when I think of how movies with interesting ideas or topics can put me off with their theatrical acting and trite writing. After that it's just a matter of tolerance level and pickiness.


> I think he was 24?

27, but equally impressive nonetheless


Definitely. Thanks for the clarification!

I honestly think it's kind of an important feature of the book, since it's about the evolution of one's consciousness. At least for me, being able to place how old he was as he found himself making the thing helps the parts that can feel sort of strange have the right contextual... extrusion? I recall what it was like to be in my mid-20s; I was thinking a lot about what it meant to be alive, and around 25 people suddenly started treating me like what I thought was an adult.

The content of the book being about the progression of consciousness and all, yeah, it's a different book if you know he's 27 than if you think he's 60.


I appreciate the context. I read the first 150 or so pages about 20 years ago and gave up. (Had the same reaction to Don Quixote.) I enjoyed the links between disparate topics, but the style got to be too much.

I’m convinced to pick it back up.


This is interesting. I picked up GEB a while ago and never finished; got to page 400 (I just checked because it's sitting next to me). I was fatigued from reading the same book for so long. It didn't get boring, but it's not an easy read, so I found I couldn't read it before bed; otherwise I'd have to reread pages for it to sink in. That meant I couldn't spend as much time reading it as I could other books, which is part of why the fatigue set in, I think.

I'm going to try this MIT course; I think it will help motivate me to get through the rest of the book finally.


The internet says Hofstadter was born in 1945 and GEB was published in 1979. ~34 years later?


Sounds about right? What's your point?

If you're trying to extrapolate his age from the publication date: someone below corrected me. He was 27 when he wrote it. A few years between writing and publishing the first edition lines up.


You said 24. That sounded very young. So I looked. No point beyond that. T’wasn’t an attack.


You posted something inaccurate and chasing offered a correction. Instead of pretending to miss the point, accept the correction as you did with ramblerman.


I read this book when I was a kid, and back then I frequently was not concerned with reading books from beginning to end. I'd start in the beginning, skip to the middle if it got boring, skip to the end if I wanted to kill a cliffhanger, then go back and read everything in between, or eventually restart from the beginning. So this is how I read GEB. I think I never would have gotten through it from start to finish.


It took me too long to figure out that coming into a dense tome can be a dance. You can move around to get an idea, pick out what interests, take a dive, then really do a contextual exploration.

Found this after many years of sort of doing something similar without knowing it was a thing. It honestly felt like cheating sometimes for some reason?

https://en.wikipedia.org/wiki/How_to_Read_a_Book


Thanks for this info. I've attempted and put down GEB about 3 times, never getting too far. The stories were just childish to me.


I recently started reading the book and when I got to the first dialog part I got incredibly annoyed at how childish and unnatural the writing was.

I've yet to see why people think this book is so life changing.


Thanks for sharing this course, very interesting.

I wish HN had an easy way to save comments like Reddit, but this reply should do the trick :)


https://news.ycombinator.com/user?id=greggarious has your upvoted and favorited comments and stories. (Your favorites are public, your votes private.)


Thanks, that is useful but I upvote lots of stuff so a separate saved list can also be useful :)

For now, I have a bookmarks folder for stuff like this.


The 'favorites' feature is a saved list. It's not widely used. Maybe we should promote it, or maybe get rid of it.


My humble suggestion(s) to make "favorites" more useful (and hopefully more used):

1. add the ability to "favorite" from the front page. I rarely "favorite" anything, because I grew accustomed to using "upvote" as the mechanism to save stories. I would switch, but that extra little bit of friction of needing to click through first, means I stick with my old habit.

2. Add "favorite" for comments as well.

3. (maybe) - consider renaming "favorite" to "save"? Possibly people who are used to seeing the "save" link on Reddit would find that more intuitive.


You can favorite comments but you have to click their timestamp first. It's really hidden. I think most people don't know about the feature.

dang: if you get rid of it please don't do so w/o notice so that those of us who do use it can export the links first.


> You can favorite comments but you have to click their timestamp first. It's really hidden.

Heh, there ya go. I'm one of the people who uses the favorite feature (albeit sporadically) and I didn't even know you could do that.

Yeah, surfacing the UI element for "favorite" to make it easier to find would, IMO, make it a lot more usable.


I'd cast my vote for promoting it more, it's a pretty handy feature (at least for comments)... I also like that it lists things in the order of recently favorited rather than chronologically.


Pinboard [0] solves it for me. I tag all the interesting stories with "hackernews", so I can easily go back and search.

I noticed over the years that my use of Pinboard is very asymmetrical: I save several URLs every month, but retrieve at most a handful per year. But knowing that my memories are safely stored gives me peace of mind (and, just to be safe, I make a backup every year or so).

[0] https://pinboard.in


> I noticed over the years that my use of Pinboard is very asymmetrical: I save several URLs every month, but retrieve at most a handful per year

It's better to have a URL and not need it, than to need a URL and not have it :)


I use org-protocol-capture[0] (in Emacs) along with Syncthing to push my .org files to all of my devices (including mobile). That way it's local but accessible from 'anywhere' (for me).

[0] https://github.com/alphapapa/org-protocol-capture-html


Terrific comment on your path to getting more from this book. I just purchased IAASL and am going to review the OCW link. This kind of discussion is so valuable in getting through books like GEB which can be very challenging and require starts and stops.


Same here. I read GEB in grad school, as a distraction from the hassles of my thesis work. And I skipped over parts that mystified me.

But then, many years later, I read IAASL. And then reread GEB. Which I could then appreciate much better.


Can you be more specific - exactly from what page should one start reading GEB?


It's been about 2 years I think since I did that. Skip the Socratic dialogs basically. I think the course I linked is the same one I went through. He covers it in the first video.

Also, once you get through the other parts, the Socratic discussions are easy to come back to. You just don't end up lost in the middle of nowhere, taxed by his presentation before you've really found the book.


Wow, I would recommend exactly the opposite. If you wanted a dry statement of Hofstadter's philosophy, you could read I Am A Strange Loop, but GEB is remarkable for its form, including the dialogues.


Eh, worked for me! YMMV.

I couldn't do the dialogs until I got through the rest. I also enjoyed IAASL a bit more than I enjoyed GEB. Definitely not saying what helped me figure the book out is what everyone should do.


I guess this just goes to show there's a variety of reasons people enjoy Hofstadter's work.

Hofstadter, introducing IAASL, said something like: "I wrote this book because people got so distracted by the form of GEB that they didn't get the point I was making."

My reaction: "I understood the point, I just didn't care about it as much as everything else that was happening in GEB."

Kind of similar to my appreciation of Bach himself, in fact. The content of Bach's compositions is frequently about praising God, which has no particular relevance to me. But the form and aesthetics of Bach are wonderful.


Well, if you forget about the fact that Aunt Hillary is the thesis, I suppose.


> Skip the Socratic dialogs basically.

I agree, except make sure not to skip the SHRDLU [1] one, which isn't actually Hofstadter's writing at all, but a demo of the SHRDLU system which is fascinating and should not be missed.

[1] https://en.wikipedia.org/wiki/SHRDLU


You would have to know which printing to be specific about.


Thanks for the OCW link. Would you happen to have all the lectures saved? Videos for lectures 6 and 7 are missing.


Do you literally mean skip to page 300 and begin?


I would assume the idea is to find a chapter or section start around that point.

Maybe Chapter X, "Levels of Description, and Computer Systems" on page 285. Or Chapter XI, "Brains and Thoughts" on 337.


I read and finished GEB first, and then read 'I am a Strange Loop'.

I think the trick to reading GEB is to not take it so seriously. You can skim sections, you can skip parts, you don't have to understand everything, not everything has to make sense. I think the author here misses that when he complains so much about the quality of the metaphors. They're not meant to help you understand so much as to give a vague idea of what's coming, in a fun way.

If you still don't enjoy GEB, then it's not for you, and that's OK too.

I found 'I am a Strange Loop' a more dry and less enjoyable read than GEB, but it was certainly a lot clearer, and better written in many ways. I would probably recommend it before GEB for most people.


You're hitting the nail on its flattest head here. The metaphors are deliberately broken. The ways in which they break are meant to show aspects of the system.

Theorems have characteristics that aren't obvious to non-mathematicians. Hofstadter's goal was to convey some of these characteristics. A metaphor that says "if a theorem were a record and a mathematical system were a record player, here is what would happen to the record player when it played a self-referential theorem record" is an excellent way to show how theorems are different from anything we normally deal with.

A more modern book would have been able to rely on the metaphor of a program "crashing" which would have been sad and inept. There are a thousand ways a program can crash and very few of them reflect any aspects of Gödelian incompleteness. They're just poorly made, like a record with a big hole in the track that breaks the record needle.
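That said, programs can do benign self-reference too: the cleanest analogue is a quine, a program whose output is exactly its own source code (a term Hofstadter himself coined in GEB). A minimal Python example:

```python
# A quine: running this two-line program prints its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is the same Gödelian move the record metaphor gestures at: the string plays both roles, data being quoted (via %r) and instruction being executed.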

There are essays in Hofstadter's collection "Metamagical Themas" that expand significantly on his idea of what metaphors and analogies actually are and what he thinks they're capable of doing. Reading those makes it very clear why he'd use broken metaphors in GEBEGB.

I think GEBEGB is most useful in exposing people who don't yet have any formal training in mathematical theory to some deeper concepts and inspiring them to find the subtle edges of these systems. I read it at about 12 and it changed my life by exposing subtle fripperies of thought. If I'd read it at 20, as the author of this article did, I too would probably have experienced these little amusements as lack of rigor. Instead I experienced them as mysteries, and now as an adult I experience them as very dry jokes and thought prompts.


> The metaphors are deliberately broken

Also adding to this: a main point of the book is that no sufficiently powerful logical system can be both consistent and complete, so someone trying to get the real big picture needs to be able to live comfortably outside logic as well.

A little bit like Zen thinking. You cannot make a clapping sound with one hand. But you have to accept that the question can exist in a space with zero answers.


I definitely think GEB is important to getting into IAASL, which is why, after I got into the book a bit, I put it down and really made an effort to get into GEB. IAASL can totally be read on its own, but I think it's fair to say you at least want to have a copy of GEB around to look at while you're reading it.

I can't recommend that course enough, either. I guess I couldn't find my way into the book on my own. So anyone who can, great! But having someone who knows the book take you through it, at least for me, made it a lot more fun. It was, in my experience, kind of a rare book in how much I wanted to share what I was looking at with someone but didn't really have anyone to share it with. Having the prof give a breakdown made it a lot more digestible for me.


What's IAASL?


Hofstadter's other work, I Am a Strange Loop.


One of his other works, rather. There are a lot, including TMI,[0] MT,[1] and FCaCA[2] for instance, and I've forgotten some others.

0. The Mind's I

1. Metamagical Themas

2. Fluid Concepts and Creative Analogies


> You can skim sections, you can skip parts, you don't have to understand everything

This was very true of the first (several) times I read through the book. I pulled the book off the shelf of my Dad's library at the age of maybe 10-12 (it had the coolest cover), and couldn't make head nor tail out of most of it. It did have a bunch of interesting pictures and I skipped most of the prose to read the funny and intriguing dialogues.

Each time I re-read the book I felt like I got more out of it, I think I'd read it more than ten times over by the time I was 20.

Having been through GEB a few times made Metamagical Themas easier too, and the same rule of "skip to the next section, or just take a break if I'm not getting it" worked well there too.


I have both books. I read them using the exact same approach you suggested. I often skipped parts when I didn't want to read them.

GEB is not a book to consume only once. I came back again and again. The profound meaning of the book and Gödel's theorem goes beyond what many people can imagine. GEB itself is a metaphor.


IAASL also has one of the best (and easiest to understand) discussions of non-human sentience - when he talks about why he's a vegetarian.


I am not a vegan, but his consciousness cone is something I seem to bring up frequently with people. Usually by the end of the conversation I don't quite know why I'm not vegan, aside from habit. Last time I brought it up I was eating sushi and talking about how I had decided to quit eating cephalopods because they might be smarter than humans... but then I ate pork belly and just didn't really know what I stood for anymore. Pigs don't have a complex skin-pattern language, but they are smarter than my cats, whom I would never eat. Other people's cats might be smarter than a pig; mine are pretty dumb.

I need to double down on meal planning and just really quit it.


I have a simple rule - I don't eat any creature that is sentient enough to choose to be vegan...


That's kind of you. I assume your coworkers appreciate that.


All of them? I have my doubts about the sentience of some of mine, although at first blush that appears uncorrelated with eating habits.


So you'll eat humans with intellectual disabilities?


If a logical statement is true, it does not imply the inverse is true, but the contrapositive.

Here is a refresher: https://www.varsitytutors.com/hotmath/hotmath_help/topics/co...

Also, who says the intellectually disabled can't be vegans?
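For what it's worth, the equivalence is easy to machine-check. Here's a quick truth-table sketch (the `implies` helper and the vegan-themed comments are just illustrative, not from anyone's actual argument):

```python
from itertools import product

def implies(p, q):
    # material implication: "p implies q" is false only when p is true and q is false
    return (not p) or q

for p, q in product([True, False], repeat=2):
    statement = implies(p, q)               # "if vegan-capable, then not eaten"
    contrapositive = implies(not q, not p)  # "if eaten, then not vegan-capable"
    inverse = implies(not p, not q)         # "if not vegan-capable, then eaten"
    assert statement == contrapositive      # always equivalent
    if (p, q) == (False, True):
        assert statement != inverse         # the inverse is NOT equivalent

print("contrapositive is equivalent; the inverse is not")
```

The assertion at the bottom is exactly where the "eat the non-vegan-capable" reading breaks down: the inverse can be false while the original statement is true.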


I think he’s simply saying that a subset are not able to even understand that there is a choice to be made, and that the logic then would be that you can eat people from that subset.


Consider that cephalopods are on the menu of other cephalopods despite their mental powers ...


Yeah, but I'm a weird thing that has to do very little to get food. A lot of the time my meals come from a car window. I don't quite live the same oceanic experience that a pack of Humboldt squid get to traverse. I guess the place that I get to with it is that I have a choice, and that is a unique privilege. I have no problem with people eating animals. It's a bizarre luxury of the type of thing I am, where I don't have to kill to live and nothing is trying to kill me that I know of and/or worry about.

If I have that knowledge, potentially I have a moral imperative to use it. Compassion for things I don't have to extend it to might be something I'm supposed to do with my time piloting this homunculus.


Pigs are awesome and most of them (all except for wild boar) would not be around if not for us eating them. We should strive for avoiding unnecessary hardship and unsustainably high numbers, not for extermination through total disuse.


I agree with you we should strive to avoid unnecessary hardship. Everything dies, and a life that ends with humane[0] slaughter is not necessarily one that wasn't worth living.

However I suspect we agree some lives are so filled with suffering that the compassionate choice is to not breed a sentient being into that life. Imagine choosing to get pregnant if you had certain evidence it'd damn your future child to a life of physical and mental anguish -- you'd refuse. That's the life of pigs and chickens on modern farms: too much suffering[1], below the threshold of "worth living."

Moreover, the state of the animal welfare movement is bleak[2] compared to the scope. Farm animal advocacy is a niche and regularly ridiculed. Progress is constantly on shaky ground: recent initiatives in MA and CA prohibiting animal products from animals raised in cages too small to turn around in are at serious risk of being overturned by the federal government's "King amendment," which would prevent any state-based welfare reforms. I'm passionately in support of this work, but under no delusion we're near the end of it.

Given that, the strategy I advocate for is both improved treatment of farm animals and reduced breeding of them -- at least until farm animal treatment improves. Hoping that treatment will catch up soon so that it'll obviate the need for dietary change is unrealistic today, though I sincerely hope (and will support those trying) to get there[3].

[0] Caveat that slaughter today is not always humane, in particular for chickens and fish.

[1] E.g. gestation crates and lifetime of small cages, genetic breeding that causes chickens to grow so fast they break their own legs, debeaking practices, and more.

[2] Progress is happening, and in fact the wins we've had are hugely beneficial to animals -- but I suspect decades before we're at "enough" of an improvement. Within that time billions of animals will be raised and slaughtered for food; the scale is hard to fathom.

[3] And frankly, I expect cultured meat and sophisticated plant-based meats like Beyond Burger to gain wide adoption before that.


Is it such a great thing to be born that we are doing something good for certain pigs by causing them to come into existence?



Is it such a great thing to be born that we are doing something good for our children by causing them to come into existence?

I mean, c'mon. The only way for there to be something good, or great, from a first-person experience, is for there to be a first person to actually do the experiencing.

Caveat, of course, bringing something into an existence that is only suffering is probably not so great. This is why we need to treat our food animals well (not to mention our kids!)


David Benatar is the most famous exponent of the opposing view:

https://en.wikipedia.org/wiki/David_Benatar

https://en.wikipedia.org/wiki/Antinatalism

Edit: though several weaker views are more common in animal rights advocacy, for instance that being born in order to be raised for food is not a benefit to an animal, that being born in order to be raised for food in a factory farm is not a benefit to an animal, or that being born is a benefit but that conferring a benefit on an animal doesn't make conferring harms on the same animal legitimate or appropriate even if the benefits and harms are structured as a "package deal".



This is such a patently ridiculous, self-defeating viewpoint. Simple teleology defeats it, in the sense that everyone who holds this view will probably die childless, and will not be very successful at spreading their views generationally.

As for the animals - hey, I said I want them treated decently. I don't extend them the same courtesies as I extend to humans because they're not human, and I don't shed a tear for their deaths because to do so is to anthropomorphize them, a quintessentially childish way of viewing the world.

> being born in order to be raised for food in a factory farm is not a benefit to an animal

Said animal wouldn't exist otherwise, and could not benefit from not existing, so the potential for benefit is always higher on the existence side, even if the probability is very low - it's still higher than a guaranteed 0.


I don't feel like you're really engaging with any of these ideas.

> Simple teleology defeats it, in the sense that everyone who holds this view will probably die childless, and will not be very successful at spreading their views generationally.

This seems like a particularly raw invocation of

https://en.wikipedia.org/wiki/Argumentum_ad_populum

The same reasoning would seem to suggest that religions that successfully encourage people to have many children are most likely to be correct because they provide one method for spreading their views (that is, having children and then teaching them to believe in the religions' doctrines). This is one way that religious views do get spread in the world and one demographic factor in religious belief, but it's hard to see the connection between this and the correctness of each belief as we might otherwise understand correctness.


> Simple teleology defeats it, in the sense that everyone who holds this view will probably die childless, and will not be very successful at spreading their views generationally.

This is very silly, it plainly conflates truth and popularity.

> As for the animals - hey, I said I want them treated decently.

Suggests "decent" treatment of animals is preferable to "indecent" treatment; this is broadly accepted.

> I don't extend them the same courtesies as I extend to humans because they're not human

Suggests animals/animal suffering are/is of less moral importance than humans/human suffering. This is also broadly accepted.

> I don't shed a tear for their deaths because to do so is to anthropomorphize them, a quintessentially childish way of viewing the world.

Suggests anthropomorphisation of animals is undesirable because it is stereotypically childish; childishness is bad. Seems like quite a silly argument, kids are also fond of breathing etc.

> Said animal wouldn't exist otherwise, and could not benefit from not existing, so the potential for benefit is always higher on the existence side, even if the probability is very low - it's still higher than a guaranteed 0.

You care about the expectation, not the "potential for benefit".

In any case, the way in which you've presented these ideas suggests that you're a consequentialist maximiser (e.g. you think that it's the outcomes of actions/choices/rules that matter, and that there exists some partial order of the desirability of possible worlds). I'd suggest that if you keep thinking about animal welfare and how to balance the lives and living conditions of animals against your own social and dietary needs, you may well decide that you want to reduce your meat intake. Peter Singer's "Animal Liberation" is fairly classic; "The Possibility of an Ongoing Moral Catastrophe"[0] is much more general but pretty great.

[0] https://link.springer.com/article/10.1007/s10677-015-9567-7


Based on this logic, raising kids in order to roast 9/10 of them alive and eat them would be better than declining to have kids, because you disregard the suffering in some sort of interesting moral arithmetic.

The bigger issue is not the nature of the math but the notion that you can subject the matter to it at all, like asking for the answer to 42 + banana.

The simpler answer is that we ought to do fewer immoral things, not find interesting justifications for doing them.


We also need to treat our food animals well, because eating poorly treated animals is unhealthy. Both physically, and spiritually. Bad chemicals, and bad karma.


My intuition is that suffering dominates our lives. This is especially true for animals. So I don’t believe we are doing pigs any favours by giving them life.


> My intuition is that suffering dominates our lives

Maybe your life. I feel pretty good. Have you considered doing things to reduce your suffering? There are a lot of great options. Take a few deep breaths and consider whether it's really as bad as all that.

Suffering in the human context is also very relative to tolerance. Experiencing the occasional great suffering (whether physical or emotional) puts your daily aches and pains and disappointments into proper contrast so you can tell that you should learn how to not take them very seriously.


Ending all life would end all suffering; it's one of those dangerous philosophical ideas we prefer not to think about too often. The other extreme would be "every sperm is sacred," and I firmly believe that both should be avoided, but only for pigs.


That should be "but not only for pigs", obviously. Otherwise it would be a very interesting standpoint.


Pet pigs are a thing and have been for a while. Bit of a trend at the moment, actually.


> quit eating cephalopods because they might be smarter than humans

Ahem, what?


Surely OP's statement is hyperbole... I hope.


perhaps https://www.lighter.world can help...

i live with my own temptations in this realm, so i'm not coming from a "i've done it" POV, but from a "this is where i lean when i'm pretty sure i need to give it a real go" one.


The site is sadly broken on Firefox.


Seems okay on my Mac OS X, Firefox versions 60.x & 61.0.1


I've read it cover-to-cover at least three times, gifted a copy, and read it out loud to my son. I find it very playful and mind-expanding. I will admit that it is verbose and wandering, but I think that's part of the charm.

Out of curiosity I wandered through Amazon's 1-star reviews of this book to see what other people had to say. "Obtuse and impenetrable", I can certainly understand. It takes a certain bent to put the effort into reading it, and a certain level of interest to sustain that effort.

The bulk of the critiques, though, including the OP, seem to be that it's too shallow in some specific area. How could it possibly dive any deeper into a fraction of the topics covered without being split into a multi-volume set? Perhaps more economical prose would allow for deeper dives, but personally I think it serves as an excellent layperson's introduction to a huge number of topics in logic, philosophy, theology, biology, music, art, language, etc.


I agree with you for the most part. My central issue while reading was Hofstadter’s self-references. They eventually fall into the background but it was a turn-off for me in the beginning. And they aren’t inherently distasteful but in context of the ‘wandering’ and precarious nature of the work, I had to wonder if the dependency on one’s own series of particular fascinations do have an intellectual corollary or not. If I had to conclude, I’d say they do, but the book itself might still be a house of cards, however inspiring and all the rest.


"House of cards" is an interesting analogy. Most books do a better job of appealing to a wide audience, while GEB comes with a long list of potential pitfalls, even for interested readers.


>>takes a certain bent to put the effort into reading it

Years back, I scanned through a few pages and realized it needed effort as you mention.

OTOH, I think there are 2 kinds of great books: ones which just suck you into their world, and others which demand an effort/tax to pay to immerse in that world. The former are incredible books, while the latter are still great.

I found `On Lisp` by PG in the same latter category though GEB and OL tackle different subjects.


I find the latter to be too tedious usually. If they don't at least start out as the former then the latter will never be finished by me. I think that those two kinds are poles of a spectrum and I believe there is a falloff point not too far past the middle that I find acceptable.

I think it is interesting you say the one end is incredible but the other end is "still great". Does that mean you enjoy them less?


> Does that mean you enjoy them less?

Enjoyment is not the point. I am a different person because of reading GEB, and my coding style changed because of reading On Lisp. With internalisation and comprehension, the first kind of book will change the way that you think forever. This kind of book is going to be boring if you are unable to understand the concepts that it is trying to communicate.

If these concepts are going to change you forever it might take some reflection (i.e. if you continue perusing the entire book at "reading speed" you will hit a point where you no longer understand what the author is talking about due to not sufficiently internalising the earlier material.)


That's lovely, but didn't answer the question. But I can see that perhaps I offended you, I was just curious, so sorry.


> The high-level, slightly handwavey description of this theorem is that, for any sufficiently expressive mathematical proof system, there are more true facts within the system than there are proofs for facts, which in turn means that not every fact can be proved

No, no, no. The proof is not by cardinality. The proof is constructive. That is the point of the book, or at least a major theme of it. For example, that is the point of the record player analogy.
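Right: the heart of the proof is a constructed self-referential sentence, not a counting argument. The flavor of that construction can be glimpsed in a quine, a program that prints its own source. This is only a loose analogy to Gödel numbering, not the proof itself:

```python
# A minimal quine: running this prints exactly its own source code.
# The trick -- a piece of data that describes the code containing it --
# is the same flavor of self-reference Godel's construction relies on.
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

Saving it to a file, running it, and diffing the output against the file shows they match.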


If I recall, the point is that if you have a record player that can play _every record_, then it can play _a record that breaks it_.

I think the piece's statement is supposed to relate to the "enumerating all deductions" bit of the Godel-numbering


The author also wrongly asserts that this metaphor is supposed to illuminate the incompleteness theorem when really it's just teasing some ideas that he explains in more detail later in the book.


> "the proof"

As if there can only be one. (And it seems OP is describing the theorem, not a proof)


It's just plain wrong to say that there are more facts in a theory than proofs of facts. Both of these sets are countably infinite. Godel's theorem proves that provable facts are a strict subset of all facts.
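A sketch of why both sets are countable: formulas and proofs are finite strings over a finite alphabet, and those can be enumerated shortest-first, giving every one a position in a single infinite list. The alphabet below is a toy stand-in, of course:

```python
from itertools import count, product

ALPHABET = "01()->~"  # toy symbol set standing in for a real formal language

def all_strings():
    """Yield every finite string over ALPHABET, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# Every formula (and every proof) appears at some finite index in this
# enumeration, so each set is in bijection with a subset of the naturals,
# i.e. countable.
gen = all_strings()
first_ten = [next(gen) for _ in range(10)]
print(first_ten)
```

So "more" in the cardinality sense can't be what's going on; the interesting content is the strict-subset relationship.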


While I agree with you, in the context of a "hand wavey" explanation, "more" is not rigorously defined and so saying "plain wrong" is silly. A strict subset relationship is a reasonable interpretation of "more" in the context of an informal discussion.


Yes, but the use of "which in turn" suggests that was not meant; it suggests that what follows is a real deduction and not a mere rephrasing.


Leaving aside the fact that talking "informally" about infinities easily leads to wrong conclusions, the OP wasn't even talking about a strict subset relationship. The comparison was between a set of proofs and a set of facts, which are sets of different mathematical objects. So at best you can say that each proof proves a fact (a fact can have several proofs) and the set of facts that have a proof is a strict subset of all facts. Or, informally, there are more facts than facts which can be proved. Oh, wait, this just restates the theorem we're trying to prove in the first place and doesn't give any insight into why it's correct.


>It's just plain wrong to say that there are more facts in a theory than proofs of facts. Both of these sets are countably infinite.

Some infinities are larger than others. This was proved by Georg Cantor, who created set theory.


And if you read about Cantor you will find he defined a term called 'countably infinite'. No 'countably infinite' set has a higher cardinality than any other.


Like another commenter, I was in my 20s. I had no background in Math or CS when I read GEB. I was just a self taught C programmer writing door games for BBS systems. I read it cover to cover and it inflated my mind like a segmented bouncy castle. I had whole rooms to play in that I had never even realized were in there. Parts of it were slow, but I just put that down to my ignorance and plowed on. I can honestly say the book changed my life.


Similar here. I accept that it's a cartoon-y version of Big Adult Concepts, but as a teenager, it taught me what meta means.


I think there is a modern predisposition particularly in analytic, academic circles wherein criticizing a work is tantamount to somehow being "more superior" than the work itself. This article reminds me of such ivory tower conceits.

What I liked about Godel, Escher and Bach was that although it introduced some heavy concepts--it did so in a playful way. It opened many doors in relation to consideration of the world and the things in it that had been closed to me previously.

I am sure I could find some way to suggest that the work is somehow "not as good as everyone thinks". I can do that with any literature, but such articles are more like flame bait. What I have to ask is "Did my engaging with the work leave me having a fundamentally different worldview? Did it broaden my horizons somehow?" In the case of this work, yes, it did. Everything else is a footnote to that experience.


What's the point of, without any knowledge about him, attacking the reviewer as a poorly motivated elitist? That he disagrees with you does not mean that he is an inferior person to you.


You missed the point: sui generis, this kind of review... it's a kind of thing. It belongs in a set. The observations relate to the set, the general properties of the set. That's why we're allowed to make them. Does he know the author? No. Does the author display all the tropes which conform to members of the set? Yes.


There's a discussion in GEB about mystery/detective novels, and how the physical fact of seeing that you're near the end of the book leaks information about the state of the plot to you.

He muses - what if you inserted some number of blank pages at the end, so you weren't sure where the end was? But no, that's too easy - you might flip through and see them. What about Lorem Ipsum text? Well, that's better, but still obvious.

What you really need to do is pad out the book with text that looks as much as possible like the real book, but isn't.

And then one reads the latter parts of GEB, and you start wondering...


Chuck Palahniuk wanted one of his novels to direct the reader to a page number when each chapter ended, so you wouldn't be able to tell when the book was going to end. Seems like a sensible solution.


I am reminded of the Choose Your Own Adventure Series: https://en.wikipedia.org/wiki/Choose_Your_Own_Adventure


cortazar’s hopscotch is written that way


I appreciate being able to turn off the percentage progress indicator in the Kindle app for exactly this reason when reading fiction.


I notice HN makes my Chrome plugin Win7 Scrollbars invisible - perhaps for the same reason.


From what I can tell the book has no real beginning or end, but three threads, each of which starts and ends in a different place, none of which starts or ends at the beginning or ending of the book. That chapter seems to be the end of one of those threads, as the rest of the chapter after that statement doesn't really make sense.


So, I’ve never finished Infinite Jest. My father once told me that’s okay— it ends in the middle.

Cloud Atlas employs the same structure deliberately, with wonderful narrative impact. You have to climb up the mountain in order to climb back down.

I do think GEB is the same way. The first half of the book is the climb. The second half is merely the question, what can we see from way up here?


What you really need to do is pad out the book with text that looks as much as possible like the real book, but isn't.

One of the best things I've seen in that regard (although I doubt this was the reason for it) is simply the old "preview of forthcoming novel FOO by $AUTHOR" section.


Wait, what? Are you suggesting that at some point in the book the text is just padding to throw you off? I haven't read it yet, but if that is what you are suggesting then I actually want to read it now. (If not, then I still don't feel any pull to it at all.)


Yes that's perfectly plausible :-) A lot of the book similarly makes you wonder about the book itself, what tricks are being played. That's one of the enjoyable things about it. (See my comment here: https://news.ycombinator.com/item?id=17464621)


Yes, and in fact he makes it perhaps a bit too explicit that that's what he's doing.


That seems exactly like what House of Leaves does. The actual book ends maybe not even halfway through the pages, but there are footnotes upon footnotes, and footnotes within footnotes, then a huge appendix at the end.


>There's a discussion in GEB about mystery/detective novels, and how the physical fact of seeing that you're near the end of the book leaks information about the state of the plot to you.

I noticed this when watching mystery tv shows. When I find myself wondering "wait is this the real culprit?" I just look at how much time is left till the ending...


This is implausible. The gag about having a story trail off into badly-written nonsense was used in that dialogue, but clearly doesn't refer to the book as a whole.

The last dialogue of GEB is an excellent example of something he's been doing for the whole book, aligning a dialogue with a Bach canon. The six speakers line up very well with the Ricercar a 6, including in details such as the midpoint where the conversation changes topic and drops down to only three speakers, and everyone rejoins on a new theme.

The topic of the dialogue is Hofstadter's philosophy of consciousness, the thing which he wishes more people understood (which I don't personally believe, btw), and which he wrote most of I Am A Strange Loop to emphasize. He wanted people to read and understand that part.

It also ends with a signature "strange loop" flourish, introducing the "Author" as a speaker, who begins a quotation, the content of which (as implied by the page layout and the label "Author:" at the start of the introduction) is the entire book.

If that's not the true end of GEB, what is?


The Kindle version of Leviathan Wakes pulled this off. They included an entire other book at the end as a teaser for another series. I was somewhat shocked when the actual book ended suddenly around 50% in.


YouTube videos do this sometimes. I've seen it in the StarCraft space.


This was a pre-internet book though.

The internet has made explanations hyper competitive. At this point I can find dozens -- perhaps hundreds -- of explanations of Godel's theorem.

But in the pre-internet days I could go to the library, or have my friend Kent explain it to me. (He had to explain first that languages were either complete or true -- he called it sound, if I recall; soundness was the property -- and then it was demonstrated that these formal languages were sound, therefore incomplete.)

Back then you couldn't just google Godel and say "oh, that reminds me of music returning to the tonic" or whatever and then google that.

I think because a lot of the ideas have sort of spread out into culture, and been re-explained over the years, the best explanation are not in those books anymore.


“The best explanation”... That is something which you should reconsider. Few problems have a single optimal solution. And knowledge is more a matter of cumulative subjective experiences than a single event.

What is good for me is bad for you (as a matter of fact I find Gödel's proof unmistakably clear).


I concede the point: the "best" explanation is best in a context.

The best explanation for a PhD candidate about to defend their thesis the next day is not the same as it is for, say, me.

But that reinforces the point I am making.


A couple of comments: 1) The author complains that the Socratic dialogs are contrived and largely consist of one character explaining something and the other saying "I see" and so on. Well, that's true, but that's kind of how the format works. You could make a similar complaint about Plato's _Republic_. 2) I think the author totally misses that the reference to not hearing the silverware during 4'33" in a cafe is a joke. Of course it wouldn't work that way.


"The author complains that the Socratic dialogs are contrived and largely consist of one character explaining something and the other saying "I see" and so on. Well, that's true, but that's kind of how the format works. You could make a similar complaint about Plato's _Republic_."

Plato is widely criticized by philosophers for doing just that. We cut him some slack, however, as he was one of the first to do so. Hofstadter, by contrast, was reusing a well-worn technique and had 2500 years of critiques of Plato and literary innovations to draw on.

Socratic dialogs do not have to be so one-sided. It's not difficult to make both sides of the dialog raise relevant, interesting, thoughtful points, and many Socratic dialogs post-Plato do this. It's sheer laziness, incompetence, or intellectual dishonesty to make them one-sided.


(Socratic dialog as literary device should not be confused with Socratic method/questioning as a thinking device)

Socratic dialogs have different uses. On one end, they can be used to consider serious existing counterarguments and provide answers to them. On the other, to explain one point of view rather than provide a balanced opinion.

A one-sided Socratic dialog that follows a 'happy path' without hairy counterarguments falls into the latter category. Hofstadter uses them to illustrate concepts, not to question them.

In fact the whole book is just an attempt to repeat the same ideas again and again.


This is a common problem that might be called "illiterate literary critique".

People who don't read books, or have restricted themselves to limited genres, subjects and styles, are easily thrown off.

It's common to see very good writing as contrived or pretentious if it's not what you are familiar with. Claims of elitism are used to paint actual skill and quality as negative.


It is possible to know where the style comes from and still dislike it. E.g. I read Plato's Republic and hate it for, among other things, its literary style. I find it artificial and, in the case of Plato, fundamentally dishonest, serving as an argument from authority (see Popper's critique in "The Open Society and Its Enemies", vol. 1).


My first GEB, I traded for a complete works of Shakespeare. The second is sitting on a shelf alongside Fluid Analogies. Thinking about the GEB phenomenon today, boggles my mind. Sure it won a Pulitzer. But it was a best seller. That's how different the world was...non-fiction could be thick and highly technical and sell in volume to the general public...it didn't need to be a new journalistic summary or popular history, it could have long passages that many readers wouldn't fully understand and puzzles that they could not solve. A non-fiction book could require the reader to work. Not vague self-improvement homework, but intellectually in the present moment of the text. It was OK for a book to be hard. GEB is a piece of genius that would never get off the ground today. Basic Books is part of a conglomerate and self-publishing isn't a replacement.


The Road to Reality by Roger Penrose was a bestseller just recently and I found that impossible to read even with a graduate degree in Physics.


This makes me feel better.

So far as I can tell, without a graduate degree in physics, that book is a giant bucket full of minced-up grad-level math, and Penrose is a pretty terrible guide to it.

On the up side it made me think I should put some time and effort into understanding the math and the ideas. That was nearly fifteen years ago now, and it's turned out to be a lot of fun as an occasional side hobby.

On the down side, he completely fails to make even relatively simple concepts either simple or accessible.

When you spend your life surrounded by some of the smartest grad students in the world, you're going to have a very unrealistic idea of what the average human can understand, or of how to explain things to the average human in a way that makes understanding more likely.

Infuriating book - but I'm still glad he wrote it, and that I tried to read it.


Mr Penrose does not pander to anyone when he writes. You get the whole lot, and that's the way I like it as a civilian. TR2R was published in 2006, so not recently.

I will admit to getting a bit bamboozled quite often ...


Have you had a chance to look at the Theoretical Minimum books by Susskind, Hrabovsky, and Friedman?


No Starch Press!


“A fun illustration of something,” is pretty much as far as the book goes: it hints at grand unifying patterns, but the pattern it finds is just the abstract notion of self-reference, and then it keeps bringing it up, making a few unnecessary references, showing some pictures, and asking, “Isn’t that cool? Isn’t that weird?”

The book is a puzzle. That's it. It's not the answer to life, the universe, and everything. It's a nice puzzle that illustrates how self-referential things relate to one another. Superficially. I'm okay with that.

Because it's a fun little puzzle, it engages people's minds in new ways. This is what has caused all of the praise over the years, not because it's some post-grad-level deep-dive into epistemology or something. I think the reviewer misses the point. The criticisms they make are actually features.

I could criticize the book as well. I struggled with it. But I wouldn't criticize it along these lines. They don't look very insightful.


What's the solution?


It's a duck.

Seriously, "puzzle" in the sense that things are assembled and related to one another to create mental constructs that are pleasing to contemplate. The solution is to read the book and work the puzzle. Or it's a duck. The duck thing might work better for some of the more literal-minded, including the author of the critique.


I admit that I found the book a bit pretentious and irritating the first time I read it, but someone encouraged me to read it again, and I was unemployed at the time, and I really enjoyed it the second time. I don't know if I just didn't understand it very well the first time, or if I had just become a bit more interested in theoretical math and philosophy, making me pay better attention.

I admit that the tortoise and Achilles analogies get kind of weird, and sometimes feel a bit contrived, but I actually found them instrumental to the overarching theme of the book: self-reference.

The theme was a bit more clear in "I am a Strange Loop", but I still really enjoyed GEB, if for no other reason than it made me finally understand the concepts of "axioms" and "theorems" and whatnot.


<quote>…well, it would have been, but for some reason I kept struggling to get through it. I thought highly of it, but my secret shame was that my admiration for the book was based mostly on the first two hundred or so pages. It took slogging effort to get myself through the rest, effort sporadically applied over the course of years. I did eventually make my way through the whole thing, but even now, I can’t necessarily be sure I’ve read and absorbed every page, even though I’ve certainly looked at each one. By the end, my impression of the book was much more reserved: it does have some genuine high points and I can understand how it came to have its classic reputation, but I also felt it had a number of problems I’ve rarely seen discussed. For me personally, it ended up falling short of its reputation as a sparkling, effervescent text that drew together art, mathematics, and culture, and given how little I’ve seen this discussed, I wanted to write down why I feel this way. </quote>

That's not the point. So what if you don't get everything? Are you supposed to get everything in Knuth's The Art of Computer Programming on the first read?

The record-playing example is a wonderful metaphor for Godel's theorem, not a terrible one.


Are you supposed to get everything in Knuth's The Art of Computer Programming on the first read?

The Art of Computer Programming is a reference work so this isn't as trenchant a counterpoint as you might have thought.


GEB is just as technically deep as TAOCP.


Yes, I also felt this reading his post. I think that he is rather ignorant of the fact that the vast majority of people reading the book (it won the Pulitzer Prize, after all) WON'T have a working knowledge of Godel's theorem. He's not well-placed to judge the efficacy of the record player analogy.


You are making a simple assertion that the metaphor was good without any explanation, while the author made a deep and cutting explanation as to why he believes it’s a terrible metaphor. You’ve made no counter-argument.


Maybe Hofstadter hadn't chosen the best metaphor, but criticizing it from the point that it contains too many irrelevant details isn't unlike dismissing the use of apples for teaching counting numbers (what happens if I bite an apple, what if an apple is larger than another one and so on).


Nitpick: "…had 13 not been prime" is not meaningless whimsy--a prime number of images is harder to lay out on the page. Mentioning this in the caption without explaining it is whimsical, but like a lot of GEB I'd argue it's whimsy with a purpose: training the reader to look for applications of abstract concepts. The reader ideally goes "wait, what? There's no connection between those two things! Unless..."


Indeed. And it's a small but interesting sentence, because it serves as a counterexample to some theories of counterfactuals. To quote the Stanford Encyclopedia of Philosophy article on David Lewis (p □→ q means "if p were true then q"):

> The basic idea behind the alternative analysis was similar to that proposed by Robert Stalnaker (1968). Let's say that an A-world is simply a possible world where A is true. Stalnaker had proposed that p □→ q was true just in case the most similar p-world to the actual world is also a q-world. Lewis offered a nice graphic way of thinking about this. He proposed that we think of similarity between worlds as a kind of metric, with the worlds arranged in some large-dimensional space, and more similar worlds being closer to each other than more dissimilar worlds. Then Stalnaker's idea is that the closest p-world has to be a q-world for p □→ q to be true.

But "13 is prime" is the most central example of a necessary truth; in this example there _are_ no p-worlds. So the fact that this sentence is, in fact, meaningful shows that the way we use language is doing something different from Stalnaker's model.

And of course, the connection between counterfactuals, analogy making, and general intelligence is one of Hofstadter's research interests, and he comes back to it in a later chapter. This example of his again kind of makes fun of counterfactual reasoning about mathematical objects:

> Related to this notion of slipping between closely related terms is the notion of seeing a given object as a variation on another object. An excellent example has been mentioned already-that of the "circle with three indentations", where in fact there is no circle at all. One has to be able to bend concepts, when it is appropriate. Nothing should be absolutely rigid.


I could have plenty of other critiques (and share some of OP's), but I had no problem with the record-player metaphor, I found it pretty apt.

The very physical/analog nature of a record player makes it not hard for me to imagine a record designed to break a player, one can totally imagine one whose grooves jostle the arm so much it breaks (no idea if this is actually practical, it doesn't really matter).

I wonder if it doesn't work anymore now that fewer people have experienced a record player, although the author DID say he encountered it first 20 years ago and had trouble with it then, I dunno.

> can the problem of record-player-breaking-records be suitably addressed by redesigning record-players? If no, why not? What if we simply read the information from a record using a visual device with no moving parts?

Dude, it's a metaphor/analogy; it's not that hard to realize that taking it that literally takes you out of it. No analogy can ever be carried to its end and still be a good analogy or metaphor. That's why it's an analogy, not the thing analogized. (Which is a lesson that is (ironically? intentionally?) consistent with the messages of GEB? The map is not the territory.)


And if I remember correctly, in the dialog, one of the record-player-destroying-record-proof record players R^whatever does scan the record visually to see if it would break the player - but is still vulnerable to a recording of 'this record cannot be played on player R^whatever' so this point is addressed...


The treatment of Godel's theorems in GEB always struck me as gratuitous: It treats a technical result as having great philosophical import. The philosophical content of Godel's theorems is already found in the Liar paradox, "This statement is false." and incompleteness in a grand sweeping sense involving language in its entirety can be found in works as early as the Daoists and the Greeks.

Edit: I should say that this wasn't at all obvious to me when I was in college and thinking hard about incompleteness. The limitations of mathematics disturbed me deeply. These days, I think it's part of the beauty of mathematics that its limitations are its own result.


> The philosophical content of Godel's theorems is already found in the Liar paradox, "This statement is false."

Sort of. Common reactions to the English version of the liar paradox are along the lines of 'that's meaningless so it doesn't imply anything' or 'so natural language is inconsistent, what do you expect?' Most of the work Gödel did was to find a way to encode meta-logic into formal language in such a way that he could then state the liar paradox in a format so ironclad that it could no longer be dismissed.


Sure, that's the technical content of Godel :) If people chose not to take the Liar paradox seriously, I think they would gnash their teeth just as vigorously at the incompleteness theorem, and there is an expansive graveyard of attempts to circumvent it.


In my opinion the most important bit of Gödel's results is Tarski's Undefinability, an essential corollary that is often completely omitted when talking about incompleteness. GEB covers this (on p580 in my copy) and talks a bit about the consequences.

The other important part, of course, is that these results are inescapable.


> it hints at grand unifying patterns, but the pattern it finds is just the abstract notion of self-reference, and then it keeps bringing it up, making a few unnecessary references, showing some pictures, and asking, “Isn’t that cool? Isn’t that weird?”

That is pretty much what I thought of the book the first time I tried to read it but didn't get all the way through. The book, especially in the beginning, was super fun to read but it lost its charm for me after the first third.

One huge benefit to me from the book is a nice introduction to Bach's music. To this day I enjoy finding patterns in music.


GEB is a book – like TAOCP – that makes me feel inadequate.

I "know" I "should" work through it. I "know" I "should" enjoy it.

But I never follow through. This review gives me hope that maybe I'm not just a ludicrous failure, but that there are different ways to go.

Obviously, I do believe that, and I don't believe I'm a total failure, but still, I can't deny the nagging voice in the back of my head sometimes.


Second time in this thread I see someone mention GEB in the same breath as TAOCP. I'm curious how they ended up occupying the same kind of mental space for you and apparently others. In the immortal words of Jules Winnfield, it "ain't the same fuckin' ballpark, it ain't the same league, it ain't even the same fuckin' sport". Not that you are under any obligation to read either and plenty of very good arguments can be made you're better off not-reading at least one of them.


In the same way as both chocolate ice cream and riding a rollercoaster make me happy, but are totally different. :-)


Right but do you feel you have to read, to pick a slightly less fraught example, the white algorithms book? Or Patterson and Hennessy? Like TAOCP, those work perfectly well skimmed or read in bits and parked on your bookshelf or in your bathroom.

GEB (or Moby Dick) are intended to be read in entirety.


I assume you mean CLRS when you say "the white algorithms book" - if so, I'm not sure I necessarily agree that any of these books are meant as reference works or "not intended to be read entirely". They're textbooks, of course, so there's something of an expectation that you'll read them over the course of months or years, but I do believe that all three of your examples are _meant_ to be read cover to cover rather than used as a reference book like a dictionary or a thesaurus or even a cookbook; if you pick up any of these books and start at a random chapter (much less a random section) you're likely to be lost. You might use them as references once you've finished them, but you are expected to start on page 1 and finish on page 1,892 (or whatever insane page count these books have now).


This 'reference' business really seems to stick in people's craws for some reason. I don't mean it disparagingly nor do I feel 'reference' is limited to the style of cookbooks and dictionaries. Nobody writes a multi-volume (or monolith-sized) technical work without the intent it also be used as reference. The work might have other purposes and even a different stated primary purpose - that doesn't make it not-a-reference.


> plenty of very good arguments can be made you're better off not-reading at least one of them.

I'm extremely confused as to why this would be the case for TAOCP.

While I wouldn't recommend TAOCP for "just reading", I don't know why it wouldn't be recommended for study. Knuth provides a very interesting approach for algorithms analysis, deep in substance.


Let's say, like many people, you enjoy the occasional article from The Onion. You can very easily continue enjoying them without reading Animal Farm, Gulliver's Travels or the collected works of Aristophanes.

TAOCP, to me, is in a similar category - it's worth remembering Knuth essentially carved the field of study out himself. If you need a reference or textbook, there are by now better places to start. If you are interested in the field, you should definitely get a hold of TAOCP as well. But you can be a perfectly fine and educated programmer (let alone person) and never touch the thing.


I completely disagree that TAOCP is a "reference work" or that invariably there are "by now better places to start". If you care to learn about any of the topics in TAOCP, to the level of detail it contains, it is usually one of the best places to read. Knuth has taken the entire mass of published literature on the topic, passed it through his "interestingness" filter, and boiled it down to usually the best writing on the topic.

For example, suppose you want to learn about Binary Decision Diagrams. I have before me TAOCP Volume 4A, which has 57 pages of text, 22 pages of exercises, 58 pages of answers, and a wealth of references. (Section 7.1.4, you can see the draft Fascicle 1B online: http://www.cs.utsa.edu/~wagner/knuth/) There are simply no other books or papers where you can learn so much about BDDs and ZDDs this well or easily. The writing is crisp and clear, and is really intended to teach, not to be a reference to look up.

Of course, a lot of people have simply no need to learn about zero-suppressed binary decision diagrams, so they can just simply not read TAOCP. But that's a different claim from saying that it's a reference work or not a good read on its topics.


not a good read on its topics.

That is a different claim and also one I didn't make.


Fair enough. BTW I do agree with your original comment that started this thread, that GEB and TAOCP are nothing alike. I greatly enjoy the experience of reading both of them, but for very different reasons.


I agree that nobody needs to read TAOCP (or, really, even any equivalent textbook) to be a good programmer.

I disagree with how you've characterized it. I'm unlikely to ever read all of TAOCP and, yes, the reason for that is that doing so would feel a little bit like a commitment to read the whole OED from cover to cover.

But in fact TAOCP isn't really a reference book and it is pretty rewarding, in a straightforward, professional sense. I dip in and out of it sort of at random and I can think of several times when doing so has made me sharper, or when I directly took something from TAOCP and ended up applying it.

My takeaway from this is that TAOCP is a super weird book, one we can't really put our finger on to characterize.


It's written much better than any reference book has any business being written. It's also very much intended to be both a survey and a reference work, among other things. But at this point, we've shaved the yak to a smooth shine and are carefully splitting the yak-hairs one by one. GEB ain't no TAOCP.


I see what you mean.

I'll be frank, I think that Knuth's analyses have surprisingly fresh and useful approaches, particularly compared with the other algorithms texts I've read. The usual texts are better at compressed results and a "Straight path" through, but as ways of thinking, I find Knuth more useful to study.

I wouldn't recommend TAOCP to anyone not interested in algorithms, mind.


Oh, I'm an unrepentant TAOCP and Knuth fanboy. TAOCP is undoubtedly a Great Work™. I'm just bothered by the notion that someone would feel bad or inadequate for not having read it.

GEB, on the other hand is... I dunno, an ok monitor riser if you don't have anything else handy.


That's what Cormen/Rivest is for. Much sturdier, too.


You're going to regret that choice when ED-209 breaks into your office and demands a precise definition of big theta within 20 seconds.


For those wondering: The Art Of Computer Programming by Donald Knuth



Only as a child, a German translation of one of his books with Knaves and Knights. Also Gardner.

It impressed me, but was too difficult for me back then.


> The dialogues are a big part of why I originally struggled with the book: they are meandering and long, and they regularly outstay their welcome.

I would disagree. I found the dialogues absolutely beautiful and a big part of what makes this book so enjoyable.

I think it would be a mistake to treat this book as a textbook whose goal is to convey some material in the most efficient way.


I still like GEB, which I first read as a teenager. But later, when I read Hofstadter's "Le Ton beau de Marot", I was struck by similar thoughts about Hofstadter's non-mathematical ideas as this review hinted at:

His tastes in music, poetry, etc, are very conservative and rather conventional, he doesn't seem to make any effort to engage seriously with any modern work, and he tries to pass this off as edgy sophistication.

After the 50th repetition of "they were playing this awful Rock Music and my brilliant friend X remarked on how awful it was", it gets a bit old.


Personally, I enjoyed the whimsy and the references in GEB. I considered the weak references past the author's knowledge to be a nod and a pointer to go make a bibliographic romp through other material as one wished. It's an inspiration along the discussion of its ideas, but is not complete. That incompleteness may be because it so deeply references itself, hence the book itself being a funny metaphor.

As directly useful reading to get someone more acquainted with a broad discussion of algorithms and logic without textbook depth of individual topics, I'd recommend _The_New_Turing_Omnibus_ by Alexander Dewdney and _The_Advent_of_the_Algorithm_ by David Berlinski. Probably the latter and then the former, actually.

I wouldn't call GEB a bad read. I think because of its high reputation people expect too much from it, like it should be the definitive book of truth about logic and proofs. It's one book, and if one reads it in the light of it being a fun romp through topics that are somehow loosely connected in the vein of James Burke I think it delivers rather well.


"I realized that I wanted to like the book much more than I actually liked it in practice."

This. Maturing has moved me past books that I "should" like. Communication is important! If it takes you 500 pages of hot air to explain something simple, where will there ever be room for a complex idea? For nuance?

It's embarrassing how a ten-minute CGP Grey video can be better than many books I've read.


I actually tend to agree with this review - the record-player analogy for incompleteness is particularly bad. But what you have here is a young man high on his intellectual gifts playing in beautiful abstract worlds, imperfectly, sometimes pointlessly. It's that intellectual playfulness that makes this book special, since books are usually either intellectual or playful but rarely both.

Not to change the subject, but Frank Herbert's work has a similarly undeserved reputation for depth, IMHO. The Dune series wants to be about so many interesting, connected things: consciousness, religion, ecology, psychology, politics and speculates about (non-scientific) connections between mind and matter. Herbert had the good taste to not describe every detail about the world, but all of his considerable artifice is a very thin veneer on what is ultimately a juvenile power fantasy. There is really nothing there, philosophically speaking, even though, to my young teenage self, it really felt like there was.


Can you give an example of fiction in which there is "something there"?


Yes, I can. Actually Herbert himself did have a good (taboo) thought in Dune with his Golden Path idea, that to save humanity it must endure a tyrant for thousands of years. Asimov's Foundation series had a lot to say about the natural ebb and flow of civilization on a long time scale, a very Buddhist view. He also had a lot to say about the nature of general AI, and the relationship to such beings with humans, in his Robot novels (I'm most familiar with the Daniel R Olivaw novels, e.g. Caves of Steel). Of course, when it comes to society, politics, and technology, Neal Stephenson, in both Snowcrash and The Diamond Age, has a lot of great philosophical ideas. Like a good author he shows, and doesn't tell us about his ideas. The character Raven in Snowcrash, for example, expresses the now pretty common understanding that individuals, armed with technology, can challenge states (e.g. bin Laden). There are lots and lots of other authors and ideas.


Has anyone read it and not found the dialogue bits meandering and superfluous? I found the other chapters acceptable but the useless dialogue left me feeling the whole book was pretentious and babbling.


I loved the dialogues; they were really helpful in explaining some of the more technical content and I loved the little puzzles and devices he scattered throughout them. The use of metaphoric exposition followed by technical description is brilliant for making reinforcement learning fun.


Sure. I love the dialogues, they are by far my favorite part of the book.


The dialogues were the only parts I could make sense of the first few times I read the book (at a young age). I greatly enjoyed them and found them both amusing and eye-opening; even more so when I could finally start making sense of the prose that accompanied them.


For me this book was a companion to Metamagical Themas, a collection of Hofstadter’s Scientific American articles. Being a collection of articles, that book is much more accessible.


> I’m familiar with how record-players work, but I have never played a record designed to break a record-player!

If I recall correctly, at some point the copy-protection measures on some audio CDs got so aggressive that they would actually brick your CD-ROM drive if you were foolish enough not to have autorun disabled (and I'm not even talking about the Sony rootkit scandal). That's a quarter century after GEB was written, though.


I'm so glad I read this. I've tried to finish the book at least three times over the last 15-20 years and never made it much beyond page 200. I loved the prose sections but had a hard time getting through the Achilles & Tortoise sections. Some of them were super long and I always felt like they were clearly trying to convey some important, deep message that I'm too stupid to understand. Despite not enjoying them I actually reread some sections because I thought it must be me not getting it, although I had no problem understanding the actual explanation. I'm so happy to learn that I can skip the silly stories and dialogs without missing the value of the book. Maybe now I'll finally finish the book?!


The author's main points are:

* The dialogs are overlong and obtuse

* Some specific metaphors don't click

* The author attacks various things which he feels Hofstadter belittles (avant-garde music, Zen)

He may be right about the dialogs, though as others have said perhaps it's better to read the chapters backwards. Or, even, metaphorically like the Crab Canon: dialog, chapter, and then dialog again.

He's clearly projecting his own perspective without thinking about the perspective of others when it comes to the metaphors. This won a Pulitzer, not everyone has a mathematical background and can straight up understand Godel when directly explained.

But it's the third part that is most troubling. These days it's fashionable to take some earlier cultural artefact, and assassinate it on the basis that every point of view is valid, genuine, and can't be criticised. I smell that same pattern in his counter-attacks on "avant-garde social commentary masquerading as music" and Zen. Hofstadter has an opinion on these. A strong, negative opinion. Attacking Hofstadter the way he does to me more smacks of a post-hoc character assassination than intelligent engagement with the material.

As an aside: I read Hofstadter (somewhere) say that he has spent a lot of his life after GEB trying to hammer home that it was about self-reference, self-reference, and nothing but self-reference. I think he even described "I am a Strange Loop" as being GEB but much more direct (I still need to read that one).


From the classic MIT-AI Lab humor directory: AI:HUMOR;GEB REVIEW

(I'm including a copy for reference and archiving since the web server publishing this link is written in MACLISP running under ITS on a PDP-10 emulator.)

http://up.update.uu.se/DSK%3AHUMOR%3BGEB%20REVIEW

The following review of "Godel, Escher, Bach: An Eternal Golden Braid," Douglas R. Hofstadter, New York: Basic Books, appeared in "Fusion", Magazine of the Fusion Energy Foundation, Oct. 1979, pp. 61.

"Douglas Hofstadter, author of "Godel, Escher, Bach," has not had such a varied experience with the antiscience movement as Bateson [Bateson, Gregory, "Mind and Nature--A Necessary Unity", preceding review in Fusion], but his brief career, nevertheless, is a clue to the message of his book.

Hofstadter is a computer expert in the field of artificial intelligence. This dismal discipline, which emanates from the Bertrand Russell-Karl Korsch networks, has been used primarily to develop brainwashing programs. Hofstadter claims to be part of this network through his close association with Marvin Minsky who, in turn, works closely with linguistician Noam Chomsky at the Massachusetts Institute of Technology. Politically these "artificial intelligence" academics link up to the Bateson circles through the various radical groups they mutually support.

Artificial intelligence is as nasty a discipline as its use in brainwashing implies. It is based on the premise that the operations of the human mind are essentially compatible with formal Aristotelian logic and thus can be replicated by a sufficiently complex computer.

......

Douglas Hofstadter's interminable driveling (777 pages) reiterates Bateson's point from the perspective of an attack on Kurt Godel's 1931 proof that any system determined by a fixed lawfulness (axiomatic logic) is necessarily incomplete, hence incapable of solving problems that can be posed within its limits. The obvious conclusion to be reached from this proof is that there is a higher order of lawfulness (reason) that determines successive, reason-determined locally lawful systems. The British oligarchy never forgave Godel for this insight, which negated Bertrand Russell's attempted destruction of Georg Cantor's introduction of the concept of the transfinite into mathematics.

Hofstadter simultaneously slanders Godel and the musical genius Johann Sebastian Bach--whose recognition of the same principle in musical composition made Beethoven's subsequent breakthroughs possible--by lumping them with the psychotic Dutch draftsman M.C. Escher.

The paradoxes of formal logic, Hofstadter contends--for example, Epimenides's statement that all Cretans are liars--are really Zen koans. There is nothing new here that the eastern mystics and their systematized irrationality did not discover in bygone millennia. In fact, he says, the solution is to imbed simple axiomatic systems in more complex ones in regress. Once this is accomplished, presto, mind and the universe can be programmed into a computer.

(Reviewed by: John Schoonover)


I also found the book to be pretentious and full of nonsense. It gave the impression something profound was being said, but was mostly poetic license that might be charming to some, but I found to be irritating since it seemed to confuse rather than clarify.


Tortoise: Right—who wants to hear the racket of clinking dishes and jangling silverware?

OP completely missed the tongue-in-cheek reference to silence here.


I first bought GEB in the early 80's. At that point, I had only a year in college and I made little headway with the book. Went back to college and ended up double-majoring in Japanese and Computer Science and took a bunch of math courses. Have now spent something like 30 yrs as a software engineer and, in particular, I've worked with many interesting people, including one mathematical logician who taught me about the likes of Alonzo Church and the history of functional programming languages.

So I picked up GEB again a couple of years ago and found it much easier going. But then I put it down again because most of the ideas, in addition to being very verbosely explained, are now old, and there really wasn't a whole lot new there aside from elaborate explanation.

I imagine that the book served its purpose in its time although if not Hofstadter, someone else would probably have come along and put together the various ideas.


Maybe he read it too old. I read it in high school and it absolutely blew my mind. It was a slog at parts, predicate calculus and symbolic manipulation like that was new to me, but I can say without qualification that it changed my thinking and life.


A metaphorical fugue on minds and machines in the spirit of Lewis Carroll

The dialogues are supposed to be whimsical and bizarre dream-like riffs on the topic at hand, a la Alice in Wonderland, not sharp teaching instruments.

I think Hofstadter did get the supposed point of Cage’s piece, or why would he have joked about whether people want to hear ambient noise or not?

The review gets good in the middle when it starts talking about Zen, and I agree with some of the conclusions towards the end. As another commenter pointed out, Hofstadter was young. In a way, the whole book is like an amateur piece of music or film or poetry, at times clumsy or campy but generally endearing. I understood this even as a kid.


I had much the same reaction to GEB as this author: it seems like I really should have liked it, but never have. To me, it always just seemed too clever by half (or, perhaps, I am just too dull-witted by some other fraction ...)


I haven't read GEB yet, but I had a similar experience reading "I Am A Strange Loop." It was too prosy, too long, too many examples and too obvious for anyone that has thought about these things before. It didn't go deep enough or propose mechanisms for anything. If you've never thought about any of these things before you might find it cool, otherwise it's a verbose rehash of things that have been said in better ways.

https://altreason.com/i-am-a-strange-loop/


Does anyone have a better metaphor, or a link to someplace with a better explanation, of the Incompleteness Theorem?


Excerpted from http://www.flownet.com/ron/chaitin.html:

Proof of the undecidability of the halting problem in six sentences:

Assume that there exists a program H that solves the halting problem, that is, H takes two input parameters, a program P and an input I and returns true if P halts when run on input I, false otherwise.

Construct a program B that does the following: B accepts as input a program P and computes H(P,P), that is, it calls H as a subroutine to determine if program P halts when run on a copy of itself as the input. If the result from H is true then B enters an infinite loop, otherwise B halts (that is, B does the opposite of what P would do when run on a copy of itself).

Now consider what happens if we run B on a copy of itself. B will halt iff H(B,B) is false. But H(B,B) is false iff B doesn't halt, which is a contradiction. Therefore, our assumption is false, and H is impossible. QED.

Turing's original proof of this theorem was remarkable (and complicated) because in his day there were no computers. Not only did Turing have to more or less invent the idea of a program, but he had to conceive of the rather mind-bending idea of running a program with a copy of itself as input. Nowadays this is commonplace. The unix command "cat `which cat`" is one of the simplest examples of a program operating on itself as input. A compiler that compiles itself is another.
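The six-sentence proof above translates almost directly into Python. The sketch below uses made-up names (`make_b`, `always_loops`); no real oracle exists, which is the whole point, so the demonstration is just the diagonal move: hand B any concrete candidate oracle and B contradicts that oracle's own prediction about B run on itself.

```python
def make_b(halts):
    """Given a claimed halting oracle halts(program, input),
    construct the adversarial program B from the proof above."""
    def b(p):
        if halts(p, p):
            while True:          # oracle says B halts on itself: loop forever
                pass
        return "halted"          # oracle says B loops: halt immediately
    return b

# Feed B any concrete oracle; its prediction about B(B) comes out wrong.
always_loops = lambda prog, inp: False   # toy oracle: "nothing ever halts"
b = make_b(always_loops)
prediction = always_loops(b, b)          # oracle predicts B(B) loops...
result = b(b)                            # ...but B(B) halts
print(prediction, result)                # prints: False halted
```

An oracle that always answers True fails symmetrically: B(B) would then loop forever, again contradicting the prediction.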


It proves that it is impossible to design a formal system within which you could prove every true statement about that same system (statements that nevertheless ARE true, and CAN be proven by reaching for a 'bigger hammer'). In a way, the problem is that whenever you expand your system to make it 'more complete' (to be able to prove more stuff), you'll invariably introduce new statements you THEN can't prove. You "can't pull yourself up by your own hair".


The incompleteness theorem says that if your logical system is powerful enough to describe the natural numbers (Peano's axioms) then it is also powerful enough to write down self referential paradoxes, like "this theorem cannot be proved". This means that your system either is unsound (if you can prove the paradoxical sentence then you are simultaneously proving it to be true and false) or that it is incomplete (some sentences, including the paradoxical ones, cannot be proved). You can't be sound and complete at the same time in the presence of self-referential paradoxes.

The proof of Godel's theorem involves showing that you can encode theorems and their proofs using just integer arithmetic. For example, you can use tricks like encoding the string "ab" as 2ᵃ3ᵇ.
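To make the prime-exponent encoding concrete, here is a toy Gödel numbering in Python (the helper names are my own invention; real arithmetizations of syntax are far more involved):

```python
def first_primes(k):
    """Return the first k primes by trial division (fine for a toy)."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def godel_encode(s):
    """Encode string s as p1^c1 * p2^c2 * ..., where ci is the i-th
    character code and pi is the i-th prime."""
    n = 1
    for p, ch in zip(first_primes(len(s)), s):
        n *= p ** ord(ch)
    return n

def godel_decode(n):
    """Recover the string by reading off each prime's exponent."""
    out = []
    p = 2
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(chr(e))
        p += 1
        while any(p % q == 0 for q in range(2, p)):  # next prime
            p += 1
    return "".join(out)

print(godel_decode(godel_encode("ab")))  # ab
```

With character codes a=97, b=98, the string "ab" really does become the single integer 2⁹⁷3⁹⁸, and unique factorization guarantees we can get it back.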

The incompleteness theorem is very closely related to Turing's undecidability theorem, which shows that by encoding programs as strings and passing them as inputs to other programs you can create paradoxical self-referential problem statements that no program can solve in finite time. It also reminds me of the lambda calculus, which really shows how far you can go by starting with a barebones system and encoding everything else on top of it. The lambda calculus starts with only anonymous functions and is able to encode numbers, arithmetic, data structures, iteration and everything else with just that. Godel's theorem uses similar tricks to show that if you start with just integer arithmetic you can also go all the way to "Turing completeness".


Here's a pretty good reddit discussion about it. That's the best I could find with a quick google.

https://www.reddit.com/r/rational/comments/30144c/geb_discus...


https://www.amazon.com/Gödels-Proof-Ernest-Nagel/dp/08147583...

Can be read in a couple of hours and you'll know as much about Goedel's work as any GEB reader. Nowadays even comes with a Hofstadter intro but no bulk or talking animals.


I'm not familiar with it directly, but if you're more familiar with comp-sci: based on my understanding of the Curry-Howard correspondence, I would think it's the logic equivalent of the halting problem. I could be way off-base though; I'd appreciate a mathematician chiming in on whether my intuition is accurate here.


(Caveat: I'm a computer scientist, not a mathematician)

The halting problem, and universal computation itself, came directly out of work on incompleteness. Incompleteness is about being unable to prove statements in a particular system; it leaves open the possibility that a more powerful system might be able to prove that statement (but that more powerful system will have its own incompleteness, and so on).

Turing got involved after attending a series of lectures on Goedel's incompleteness theorem. He came up with Turing machines, showed that they cannot decide their own halting problem, and proved that they're equivalent to the lambda calculus. Goedel tried to make a more powerful alternative to the lambda calculus, called general recursive functions, but it also turned out to be equivalent to Turing machines and the lambda calculus.

This led to the Church-Turing thesis, which claims that any effective/physical/realisable system of logic cannot be more powerful than Turing machines/lambda calculus/general recursive functions. In this sense, we can still "solve" the halting problem by using a more powerful logical system (e.g. oracle machines), but the Church-Turing thesis says that we can never actually carry out those calculations to get a true/false answer.

I like to think of incompleteness as a mathematical property, applying to all systems but allowing the "escape hatch" of switching to a more powerful system; and I like to think of the Church-Turing thesis as a law of physics: there is a limit to the logical power of any physical system (i.e. Turing completeness).

There's a nice summary at https://www.youtube.com/watch?v=GnpcMCW0RUA


Being like a law of physics makes the Church-Turing thesis really unique in computer science. Most of the things we work with are theorems but the Church-Turing thesis is distinctly non-provable. The evidence we have for it is that no one ever managed to observe a counterexample.


the question that fascinates me is - what is the property of a program that makes it undecidable? I've been playing around with the idea of trying to make a program that determines halts/doesn't halt/don't know, given some representation of a program. I'm running into some interesting implications when I incorporate dependent typing, but it feels like there must be something theoretical that's already out there.


> what is the property of a program that makes it undecidable?

That is itself an undecidable question!

There are a lot of programs for which you can tell that they halt (programs with no loops, for example). Likewise there are a lot of programs for which you can tell that they do not halt (any program whose main body is a WHILE TRUE loop in a language that does not have exceptions, for example.) And then there are lots of more complicated cases for which the answer can be found. But there are some cases where we just don't know. The best known is this:

https://en.wikipedia.org/wiki/Collatz_conjecture

Not only do we not know the answer, but we don't know if it is even possible to find the answer! We know (because Turing) that there exist programs for which we cannot find the answer, but we don't know whether the Collatz problem is one of them. And in general we cannot know, because if we could then we could use that knowledge to solve the halting problem itself! (We might be able to prove undecidability for particular programs, just as we can prove halting or non-halting for particular programs, but deciding undecidability for any particular program is in general undecidable!)
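As code, the Collatz iteration is just a few lines, yet no one can prove it terminates for every positive input. A Python rendering:

```python
def collatz_steps(x):
    """Iterate the Collatz rule until x reaches 1, counting steps.
    Conjectured, but not proven, to terminate for every positive x."""
    steps = 0
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- this one halts, but in general nobody knows
```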


but that's exactly my point - there are clear-cut halting, clear-cut non-halting, and "we don't know yet" programs. My interest is in delineating the properties that separate the 3. The collatz conjecture is exactly the example I use when explaining the "we don't know yet" programs. If I could build a logic system that can positively prove halting, and one that can positively prove non-halting, then anything that remains is "dunno".

The halting problem is basically showing that you can never prove the category of "dunno" to be empty. Which is pretty self-evident given there are unsolved bounding problems like the collatz conjecture in mathematics.

For a minimal example, if I create a program that takes an AST and runs it through two truth functions, one that checks if the AST === "return;" and prints "it halts" and one that checks if the AST === "while(true) {}" and prints "it never halts", then we've already created such an application, just with a "dunno" space containing the majority of programs. Adding practical axioms to a logic system could grow the "halts" and "never halts" spaces to the point where only a subset of applications cannot be excluded from the "dunno" space, and my question is: what are the unique properties of those applications?
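That minimal decider fits on one screen. A Python sketch of the three-valued idea (matching on source text rather than a real AST, purely for illustration):

```python
def approx_halts(src):
    """A sound but wildly incomplete halting analyser: it answers only
    when it recognises the program verbatim, and says 'dunno' otherwise."""
    if src == "return":
        return "halts"
    if src == "while True: pass":
        return "never halts"
    return "dunno"

print(approx_halts("return"))            # halts
print(approx_halts("while True: pass"))  # never halts
print(approx_halts("print(42)"))         # dunno
```

Growing the "halts"/"never halts" spaces then means adding smarter recognisers (no loops, provably decreasing measures, etc.) while keeping every answer sound.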


> the properties that separate the 3

But that is exactly my point: what separates these three classes is the state of our current mathematical knowledge, and that obviously changes over time.

> Which is pretty self-evident

No, that is not at all self-evident. (If it were, someone would have proven it long before Turing, and he wouldn't be famous for it.) I don't know what you mean by a "bounding" problem, but we can prove a lot of things about programs with a structure similar to Collatz. For example, we can prove that a program that searches for a prime larger than its input will always halt for any input.


I think op's problem is different than the one normally studied - he wants to design a halting program and find the class of programs for which it works.

This is typically not a popular line of study, as such statements depend heavily on the particular program and on how programs are represented, unlike the halting-problem computability statement itself.


Yes, I understand that. And what I'm saying is: you can't do that. The halting problem is formally equivalent to proving arbitrary mathematical theorems. For every program, there is a corresponding theorem which is true IFF the program halts, and for every theorem there is a corresponding program that halts IFF the theorem is true. So the question: for what programs can the halting problem be decided? is formally equivalent to the question: what theorems can we prove? And we can't prove any useful theorems about that because if we could, we could solve the halting problem!

[UPDATE] I want to revise my claim that if we could prove interesting theorems about which programs have decidable halting properties that we could solve the halting problem itself. That's not true, because the word "interesting" is too poorly defined. But what is true is that if we could do this then we could prove theorems that we could not previously prove (that has to be true for any reasonable definition of "interesting"). Furthermore, we could now prove these theorems in a purely mechanistic way with no need for human creativity. By any stretch of the imagination that would be a major breakthrough.


>> he wants to design a halting program and find the class of programs for which it works.

> Yes, I understand that. And what I'm saying is: you can't do that.

Not sure if it's relevant, but it's certainly possible to define an approximate halting-problem-decider, then figure out what it does and doesn't work for.

For example, a trivial implementation which checks if the given program is equal to `42`, outputs "halts" if it is, outputs "don't know" if it isn't. The space of programs it works for is { `42` }, the space of programs it doesn't work for is `remove(42, enumerateAllPrograms)`


Thank you, this is the space I'm wanting to work in - maximising the number of programs that can be decided. I'm just not sure what kind of logic I could use for such a system.


it's self-evident _at this point in time_, given you could feed such a machine the collatz conjecture as an algorithm and we just don't know if any arbitrary input will halt. I'm not claiming it is self-evident in perpetuity. By bounding problem I guess I mean the mathematical equivalent of knowing it can halt. I'm not a mathematician as I previously stated so I don't know the terminology.

I don't think the space I'm talking about is equivalent to the state of all of our mathematical knowledge, as AFAIK not all problems relate directly to this space. But a subset of all mathematics, sure (perhaps computability, or logic in general although of a higher kind than first-order). My interest is in creating a static analyser based on the set of axioms or deductions that can be derived from that space.


> it's self-evident _at this point in time_

No, it's not. Just because you and I don't know the answer is not proof that no one knows the answer. Someone may have discovered the answer in the last five minutes. (In fact, for any mathematical discovery there is always necessarily a period time when only one person knows the answer.)

> AFAIK not all problems relate directly to this space

And I am telling you that you are wrong about that. If you think that there is anything about computation that does not relate to math then you have a deep misunderstanding of either computation or math (possibly both).


alright, if you're going to resort to nitpicking and being wilfully insulting then I see there's not much point discussing this further with you.


Let me rephrase that to hopefully make it sound a little less insulting: if you are able to identify an aspect of computation that cannot be described mathematically then you will be remembered alongside Turing, Godel and von Neumann as one of the greatest thinkers humanity has ever produced. (But a priori, the odds of your being able to do that seem pretty low to me.)


well you've just gone and been inaccurate yourself - I said that I didn't believe all mathematical discoveries had an impact on computability, but you just suggested I was saying that not all computation is based in mathematics. Which is absurd and it's clear you think I'm stupid, where to me it's pretty clear you are misinterpreting what I'm saying, whether that's poor communication on my part or uncharitable interpretation on yours.

I've been friends with academics before who had an unyielding desire to interpret lay speak as an attempt to be rigorous and then criticise it as such, as you have done with me saying "self-evident", but surely you realise I'm not trying to say "self-evident" in some kind of rigorous mathematical sense? I mean if you're just trying to be obtuse it's working well but it comes off like you're trying to hold my conversational and largely intuition-based ideas to a much higher standard than I am trying to meet. I don't understand the impulse at all. If I were trying to be totally rigorous with my thoughts rather than just conversationally trying to communicate an idea that is still taking shape, I would be learning the correct syntax to write a paper and sharing that instead.


> I said that I didn't believe all mathematical discoveries had an impact on computability, but you just suggested I was saying that not all computation is based in mathematics. Which is absurd

No, that's not what I said. What I said was that this statement:

"I don't think the space I'm talking about is equivalent to the state of all of our mathematical knowledge, as AFAIK not all problems relate directly to this space."

is false. This is not a matter of opinion. There is a formal equivalence between math and computability. For every theorem, there is a corresponding program that halts IFF the theorem is true, and for every program, there is a corresponding theorem that is true IFF the program halts. Therefore, it is necessarily the case that all mathematical discoveries have an impact on our knowledge of computability because there is in fact no distinction between math and computability. They are the exact same field of study, just with different window-dressing.

> and it's clear you think I'm stupid

No, I don't think you're stupid, merely ignorant (and a bit defensive). And this has nothing to do with your lack of rigor, it has to do with your intuition that there might be something worthwhile to be discovered by trying to examine the structure of known halting and non-halting programs. This is a well understood problem. It is possible that you could discover something that everyone else has overlooked. But if you do, it would be a breakthrough of the highest order, a major revolution in both computability theory and mathematics.


If you think I'm trying to discover something then you've completely missed my point. I was never looking to discover some new concept, but to find an existing model of looking at computability and implement it as software. I know it's possible to decide halting for at least some subset of programs, because we can do it by eye; I'm just looking to do the same thing programmatically in a way that is theoretically sound.


Yes, I understand that. And what I'm trying to tell you is that this problem you have set for yourself is formally equivalent to the problem proving mathematical theorems. They are the same problem. Any substantial progress on one is necessarily also substantial progress on the other.


not quite - I'm trying to implement existing proofs in an axiomatic system. All unproven theorems can be classified under the "don't know" outcome. But I don't know what approach to use to implement a system that can look at "f(x: int) => x is cleanly divisible by 2 ? f(x + 1) : halt" and tell me that this recursive function will halt on all valid inputs. Because that would require that it be able to derive such a fact from a core understanding of what addition and division mean.


> that would require that it be able to derive such a fact from a core understanding of what addition and division mean

Yes, and that's the part that's not possible in general, only for specific cases. For example, you can prove that:

f(P: int -> boolean, x: int) => P(x) ? f(x + 1) : halt

will halt for any x IFF there are arbitrarily large integers that do NOT have the property P, so f will halt when P is "is an even integer" because there are an infinite number of odd integers (which, obviously, you can also prove).
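In executable form (a Python sketch, written iteratively rather than recursively to avoid stack limits): f scans upward from x and halts at the first integer where P fails, so it terminates for every x exactly when arbitrarily large integers lack property P.

```python
def f(P, x):
    """Scan upward from x; halt at the first integer where P fails."""
    while P(x):
        x += 1
    return x  # the witness that ended the recursion

is_even = lambda n: n % 2 == 0
print(f(is_even, 10))           # 11: halts, since odd numbers are unbounded
print(f(lambda n: n < 100, 0))  # 100: halts, since P fails from 100 onward
# f(lambda n: True, 0) would loop forever: no integer ever fails P
```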

And with that in mind, you might want to go take another look at:

https://news.ycombinator.com/item?id=17465542


> My interest is in creating a static analyser based on the set of axioms or deductions that can be derived from that space.

From the sound of it, I think this might be a little backwards: axioms and deductions give rise to a space of possible proofs, and those can be used for things like static analysis. One example of going the other way is "reverse mathematics", where we try to find a set of axioms that give rise to some statement. I'm not sure if that could be applied to a (perhaps informal) "space" of theorems rather than some particular theorem.


you're right, my phrasing wasn't ideal there. I'll try to clarify:

what I want to do is to create a static analyser based on a set of axioms/deductions that can determine which of the "always halts/does not always halt/don't know" categories a program fits into. (I'm rephrasing the categories because a program like "(x: int) => x == 1 ? halt : loop" fits into the second category).

What I'm wanting to figure out is the mathematical field in which these types of axioms reside. My naive assumption was that computability theory is the answer but in researching the various branches of similar fields it seems to me that there is a lot of crossover and similarities between concepts in different fields. So I wasn't sure if there was a field that covers what I would describe as "given a sequence of transformations of typed values, representing those transformations as relationships between inputs and an output, can we definitively prove that the program halts, or does not always halt".

For example, given the function "f(x: int) => x is even ? f(x + 1) : halt" I can know from looking at it that it will halt on any valid input because for any int x, recursing on x + 1 has to reach a value where x is not even. I just don't know how I would formalise such knowledge into a system that can actually deduce such conclusions as you could clearly change "x is even" to any "x is wholly divisible by <integer>" and the conclusion would hold.


> you could clearly change "x is even" to any "x is wholly divisible by <integer>"

Or...

"x is a prime number"

"x is a counterexample to the Goldbach conjecture"

"x is an encoding of a planar map that cannot be four-colored"

"x is the encoding of an algorithm that solves the traveling salesman problem in polynomial time"

"x is the encoding of a formal proof of the Riemann hypothesis"


For which the answer from the program is "I don't know".


Really? Even for "x is a prime number"?


How would you even express that as a piece of code? I'm not talking about magic blackbox functions.


Seriously? You don't know how to write a function that tests if a number is a prime?


of course I do - I misspoke after I missed your reference to prime numbers in the list of open/hard problems. I get why you're including it, but if we're just deciding whether the program halts then "f(x: int) => x is prime ? halt : loop" is clearly resolvable to "doesn't always halt" so I don't see your point.


> if we're just deciding whether the program halts then "f(x: int) => x is prime ? halt : loop" is clearly resolvable to "doesn't always halt"

Yes, that's right. But that was not your original example. Your original example was:

    f(x: int) => x is even ? f(x + 1) : halt
which is a much more interesting case.

But the real point (as I am now repeating for the third time) is that the halting problem is equivalent to proving arbitrary theorems. "f(x: int) => x is prime ? halt : loop" is equivalent to the (very uninteresting) theorem "X is prime" for whatever X you happen to put in. Your original example is equivalent to "there are an infinite number of even numbers", which is slightly more interesting. Replace "even" with "prime" and it gets even more interesting. Replace "prime" with one of the other examples, then it gets more interesting still.


yeah I'm aware that I'm trying to build a platform that proves arbitrary theorems. I think where we're butting heads is that I'm wanting to approach such a platform from a constructive angle - starting with simple axioms and building up from there. The initial system may only be capable of interpreting natural number equality and addition.

My original question was around what theoretical approach best matches that goal - whether it's an axiomatic system based on Peano arithmetic, or a natural deductive system, or something based on type theory. From a non-academic perspective these fields all seem very similar and there seems to be a lot of crossover so I didn't know what best suited the problem I'm trying to solve. Of course even when the system is complete some functions will remain undecidable, but I'm totally OK with that. It may even be that something like a deductive classifier better suits the problem than a mathematical representation.


> I'm aware that I'm trying to build a platform that proves arbitrary theorems.

OK, well, that was far from clear. Here's your original question:

> the question that fascinates me is - what is the property of a program that makes it undecidable? I've been playing around with the idea of trying to make a program that determines halts/doesn't halt/don't know, given some representation of a program. I'm running into some interesting implications when I incorporate dependent typing, but it feels like there must be something theoretical that's already out there.

You're right, there is "something theoretical that's already out there", and that "something theoretical" is Turing-equivalence (and a vast literature on automated theorem proving).

> My original question was around what theoretical approach best matches that goal - whether it's an axiomatic system based on Peano arithmetic, or a natural deductive system, or something based on type theory. From a non-academic perspective these fields all seem very similar and there seems to be a lot of crossover so I didn't know what best suited the problem I'm trying to solve.

You're right. All of those fields are very similar. In fact, they are equivalent, because they are all Turing-complete. It is really hard to avoid Turing-completeness, and even where you can, all of the possible options are well understood.

The bottom line is that you (AFAICT) are at the very beginning of an extremely well-worn path. Actually, not just one path, but many, many paths, all of which lead to the same place: undecidability (and meta-undecidability and meta-meta-undecidability...). No one has found a breakthrough in these woods, and there are good reasons to believe there is none to be found (though of course that can't be proved, because undecidability!). The best you can hope for is to make reasonable engineering tradeoffs that work for some domain of interest. But at the end of the day, you're just doing math, and math is hard. Nowadays it's really, really hard, because all the low-hanging fruit was picked a long, long time ago.

But if you're really serious about this, the first step is to get a stack of textbooks on computability theory and automated theorem proving and read them.


> OK, well, that was far from clear. Here's your original question:

Yeah, sorry about that - not being an academic myself makes it hard to put it across in the correct way.

> No one has found a breakthrough in these woods, and there are good reasons to believe there is none to be found

That's absolutely fine by me - I'm not looking to break through the undecidability barrier, only to implement an algorithm that can prove theorems we already know are decidable (well, group functions into the always halts/doesn't always halt/undecidable categories).

The numerous branching yet totally equivalent fields are the thing I'm wary of - I don't want to invest 6 months learning category theory, only to find out that what I want to do is better expressed by type theory or second-order logic or something else entirely. I guess in my mind it's similar to building a complex application in one programming language, only to discover that certain patterns can't be idiomatically expressed in that language, so you have to write horrible hacks to compensate for what's lacking, when you could have chosen a different language in the first place. From my own research I'm leaning towards axiom systems, perhaps utilising Zermelo-Fraenkel set theory as a basis. I don't know, it's early days yet. I'll keep reading and get a rough grasp of the principal fields before I make a definitive decision.


> I'm not looking to break through the undecidability barrier

That, too, was far from clear. You said you were tackling this problem:

> what is the property of a program that makes it undecidable?

and I told you:

> That is itself an undecidable question!

and explained why (at least three times). And AFAICT you still don't get it because you still say you want to:

> implement an algorithm that can prove theorems we already know are decidable (well, group functions into the always halts/doesn't always halt/undecidable categories)

and that is the problem of automated theorem proving, which is an open research issue, and (provably) always will be.

If you want to get a small taste of what you're up against, try proving that this program always halts:

f(x:int, y:int) = ( x+y == y+x ? halt : loop )

In other words, try proving the commutative law of addition.
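For a feel of what that involves, here is one way commutativity of Nat addition goes through in Lean 4, by induction on the second argument (a sketch; the lemma names follow Lean's core library and may need adjusting for a given version):

```lean
-- Commutativity of addition on Nat, by induction on n.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ k ih => rw [Nat.add_succ, Nat.succ_add, ih]
```

Even this "obvious" fact needs two auxiliary lemmas (that successor commutes past addition on either side), each itself proved by induction; an automated system has to find all of that on its own.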

If you get that, try to prove that this lambda-calculus expression

    (λ (n f) (n (λ (c i) (i (c (λ (f x) (i f (f x))))))(λ x f) (λ x x)))
computes a factorial expressed as a Church numeral. (See http://www.flownet.com/ron/lambda-calculus.html for some hints.)

And that's just what you get from elementary arithmetic. If you really want to blow your mind, look at:

http://www.mrob.com/pub/math/largenum.html

and the associated software:

http://www.mrob.com/pub/perl/hypercalc.html

Once you get to infinity you are only just getting started:

https://johncarlosbaez.wordpress.com/2016/06/29/large-counta...

And even all that is just what you get from discrete math! Then there's calculus, topology, algebraic varieties (of which elliptic curves are one example, and even that is a whole field of study). And even here we've not even begun to scratch the surface of what is possible. The world of math is (provably!) richer than you -- or anyone -- can possibly imagine! That's what "undecidable" means.

Your only reasonable hope of making any kind of non-trivial progress is to focus on one domain, and then use the constraints imposed by the properties of that domain to shrink the problem down to something potentially manageable.


> you could clearly change "x is even" to any "x is wholly divisible by <integer>"

The tricky thing is that it's not clear, since it depends entirely upon the formal system you're using!

As a simple example, let's say I have the following Java program:

    if (2 < 3) { return "halt"; }
    else { while (true) {}; return "never halts"; }
This program will halt, but what if we change it? We "could clearly change" the `3` to a `1` and the resulting program would not halt. What if we changed the `3` to `"hello world"`? The result would neither halt nor not-halt, since it wouldn't even be a valid Java program in the first place, due to a type mismatch. Hence it may not at all be clear whether it's valid to swap out parts of a statement or not, let alone whether that will/won't change some property of the statement.

That Java example failed due to type checking, but static analysis is very similar to type checking/inference, in the sense that it's calculating properties of code (or logical statements; they're equivalent via Curry-Howard) based entirely on the syntax. In this sense, we can think of any static analysis algorithm as being a (potentially very complicated and confusing) type system.

It's easy to see from the syntax above that the `2` and `"hello world"` have different types, and hence that this isn't valid code. When we start getting more powerful or complicated type systems, like dependent types, Goedel numbering, or some static analysis algorithm, then it can be arbitrarily difficult to figure out whether a particular arrangement of symbols is actually a valid statement or not.

This might seem petty, but consider your `is even` example, written in some dependently typed programming language (i.e. a powerful static analyser). We might say (in Agda, Idris, etc.):

    halt : String
    halt = "hello world"

    f : Int -> String
    f x = if isEven x then f (x + 1) else halt
Most type systems are "conservative", meaning that they treat "dunno" as failure. Since most dependently typed languages aren't smart enough to figure out that this halts, we would have to rewrite the program in a way that provides evidence that it halts. This can get very messy very quickly (e.g. see https://github.com/agda/agda-stdlib/blob/master/src/Relation... ). So the resulting program will be a massive pile of symbols, all carefully orchestrated such that the type checker is happy with the result.

In that situation, it is certainly not clear whether we can just replace one expression with another, even if they're the same type (e.g. replacing one integer with another), since that might break some of the surrounding proofs.

If we think of a static analyser as a complicated type inference algorithm, which can sometimes infer such proofs and coercions, then we can think of your example as being the 'tip of the iceberg', which the static analyser can "elaborate" into a much more complicated form (this is exactly how Idris works http://docs.idris-lang.org/en/latest/reference/elaborator-re... and "tactics" in Coq are similar https://coq.inria.fr/refman/proof-engine/ltac.html ).

It is this explosion of symbols which makes it difficult to talk about formal systems by using informal statements (like your example). It can be very difficult to even figure out how to represent an informal statement in a formal way, but it's only then that we can ask specific questions about algorithms (e.g. what can or can't be deduced). Often, the answers to such questions turn out to be trivial properties of the formalisation; but it may take many decades to actually come up with that formalisation!


a dependent typing engine is exactly how I was thinking of representing this - i.e "x: int => x + 1" would have a type signature of "x: int => x + 1". Further compositions would stack. If I e.g. converted the Collatz conjecture to a function I would get the following signatures:

- collatz(x: 1) => halt

- collatz(x: int where x > 0 && x % 2 = 0) => collatz(x / 2)

- collatz(x: int where x > 0 && x % 2 = 1) => collatz(x * 3 + 1)

I figure that if I extract out recursion like this (as recursion seems to be the only thing that makes a computation potentially undecidable) and bubble the branching up to the signature as a form of pattern matching, then the signature becomes something you could potentially use to figure out if it is decidable. The issue I'm stuck on is what approach to take to build a deduction system for calculating whether continuing such a sequence from any value in the allowed input space would halt - obviously at this point in time the collatz conjecture is an open question but we can also construct functions using the same notation that we can intuitively determine will either always halt or only sometimes halt, so there is surely room for the existence of such a deductive system. I just don't want to spend a year learning about e.g. category theory or natural deductive logic just to find out that this problem cannot be effectively expressed in such a system.


A dependently typed system (or presumably anything else) which allows non-halting definitions is unsound. The classic example is an infinite loop:

  loop = loop
What is the type of `loop`? We can infer it by starting with a completely generic type variable, e.g. `forall t. t`:

  loop : forall t. t
  loop = loop
Then we can look at the type of the body to see which constraints it must satisfy, and perform unification with that and the `t` we have so far. In this case the body is `loop` which has type `forall t. t` (i.e. which is completely unconstrained). Unifying `forall t. t` with `forall t. t` gives (unsurprisingly) `forall t. t`. Hence that is the type of `loop`. Yet this claims to hold for all types `t`, which must include empty types like `Empty`, which should have no values!

    data Empty where
      -- This page intentionally left blank

    loop : forall t. t
    loop = loop

    myEmptyValue : Empty
    myEmptyValue = loop
In particular, this lets us "prove" (in an invalid way) things like the Collatz conjecture. Here's a quick definition of the Collatz sequence, in Agda/Idris notation (untested):

    -- Peano arithmetic
    data Natural where
      Zero : Natural
      Succ : Natural -> Natural

    one = Succ Zero

    halve : Natural -> Natural
    halve Zero            = Zero
    halve (Succ Zero)     = Succ Zero  -- Should never reach this; included for completeness
    halve (Succ (Succ n)) = Succ (halve n)

    threeTimes : Natural -> Natural
    threeTimes Zero     = Zero
    threeTimes (Succ n) = Succ (Succ (Succ (threeTimes n)))

    data Boolean where
      True  : Boolean
      False : Boolean

    not : Boolean -> Boolean
    not True  = False
    not False = True

    ifThenElseNat : Boolean -> Natural -> Natural -> Natural
    ifThenElseNat True  x y = x
    ifThenElseNat False x y = y

    isEven : Natural -> Boolean
    isEven Zero     = True
    isEven (Succ n) = not (isEven n)

    collatzStep : Natural -> Natural
    collatzStep Zero     = one  -- Again, only for completeness
    collatzStep (Succ n) = ifThenElseNat (isEven (Succ n))
                                         (halve (Succ n))
                                         (Succ (threeTimes (Succ n)))

    -- This won't be allowed by Agda/Idris since it may or may not halt ;)
    collatzLoop : Natural -> Natural
    collatzLoop Zero            = collatzLoop (collatzStep Zero)  -- For completeness
    collatzLoop (Succ Zero)     = Succ Zero  -- Halt
    collatzLoop (Succ (Succ n)) = collatzLoop (collatzStep (Succ (Succ n)))
We can then define the Collatz conjecture, using a standard encoding of equality:

    data Equal : Natural -> Natural -> Type where
      reflexivity : (n : Natural) -> Equal n n

    CollatzConjecture : Type
    CollatzConjecture = (n : Natural) -> Equal (collatzLoop n) one

    proofOfCollatzConjecture : CollatzConjecture
    proofOfCollatzConjecture = ?

    disproofOfCollatzConjecture : CollatzConjecture -> Empty
    disproofOfCollatzConjecture purportedProof = ?
The disproof basically says "if you give me a proof of `CollatzConjecture`, I can give you a value which doesn't exist"; since that's absurd, the only way it can hold is if there are no proofs of `CollatzConjecture` to give it (this is a form of proof by contradiction: give me a supposed proof, and I'll show you why it must be wrong).

The problem with allowing non-terminating recursion is that we can use `loop` to fill in either of these proofs, or even both of them!

    proofOfCollatzConjecture : CollatzConjecture
    proofOfCollatzConjecture = loop

    disproofOfCollatzConjecture : CollatzConjecture -> Empty
    disproofOfCollatzConjecture purportedProof = loop
The type of `loop` is `forall t. t`, which can unify with either of these types, so the language/logic will allow us to use it in these definitions (or anywhere else, for that matter).

For this reason, we have to make sure our language isn't Turing-complete, which we can do using a "totality checker" (a static analyser which checks if our definitions halt: if they definitely do, they're permitted; if they don't or we can't tell, they're forbidden). That stops us from writing things like `loop`, but unfortunately it stops us from writing `collatzLoop` as well.

One way to get around this is to use "corecursion". This lets an infinite loop pass the totality checker, as long as it's definitely producing output data as it goes (e.g. like a stream which is forbidden from getting "stuck"). We can use this to make a `Delayed` type, which is just a stream of dummy data which might or might not end (a polymorphic version of this is described in more detail at http://chriswarbo.net/blog/2014-12-04-Nat_like_types.html ):

    data Delayed : Type where
      Now   : Natural -> Delayed
      Later : Delayed -> Delayed

    collatzLoop : Natural -> Delayed
    collatzLoop Zero            = Later (collatzLoop (collatzStep Zero))  -- For completeness
    collatzLoop (Succ Zero)     = Now (Succ Zero)  -- Halt
    collatzLoop (Succ (Succ n)) = Later (collatzLoop (collatzStep (Succ (Succ n))))
This will pass the totality checker since each step is guaranteed to produce some data (either the `Now` symbol or the `Later` symbol); even though the contents of a `Later` value may be infinite! If we run this version of `collatzLoop` on, say, 6 (ignoring the `Zero`/`Succ` notation for brevity), we get:

    collatzLoop 6
    Later (collatzLoop 3)
    Later (Later (collatzLoop 10))
    Later (Later (Later (collatzLoop 5)))
    Later (Later (Later (Later (collatzLoop 16))))
    Later (Later (Later (Later (Later (collatzLoop 8)))))
    Later (Later (Later (Later (Later (Later (collatzLoop 4))))))
    Later (Later (Later (Later (Later (Later (Later (collatzLoop 2)))))))
    Later (Later (Later (Later (Later (Later (Later (Later (collatzLoop 1))))))))
    Later (Later (Later (Later (Later (Later (Later (Later (Now 1))))))))
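For intuition, Python generators give a rough (untyped, unchecked) analogue of this productivity, in a sketch of my own: each `next()` is guaranteed to return promptly with either a `Later` tick or the final answer, even though nobody knows whether the overall stream is finite for every input.

```python
def collatz_delayed(n):
    """Generator analogue of the coinductive collatzLoop: yield one
    'Later' tick per step, then 'Now' paired with the final value."""
    while n != 1:
        yield "Later"
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    yield ("Now", n)

trace = list(collatz_delayed(6))   # eight "Later" ticks, then ("Now", 1)
```

Note that `list()` forces the whole stream, so it only terminates if that input's sequence does; a total language instead only lets you peel off `Later` wrappers one at a time, up to a bound.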
We can rephrase the Collatz conjecture as saying that the return value of `collatzLoop` will always, eventually, end with `Now one`. This is an existence proof: there exists an `n : Natural` such that after `n` layers of `Later` wrappers there will be a `Now one` value. Since we're dealing with constructive logic, to prove this existence we must construct the number `n` (e.g. using some function, which I'll call `boundFinder`):

    unwrap : Natural -> Delayed -> Delayed
    unwrap Zero     x         = x
    unwrap (Succ n) (Now   y) = Now y
    unwrap (Succ n) (Later y) = unwrap n y

    data DelayEqual : Delayed -> Delayed -> Type where
      delayedReflexivity : (d : Delayed) -> DelayEqual d d

    -- The notation `(x : y *** z)` is a dependent pair, where the first element has
    -- type `y` and the second element has type `z` which may refer to the first
    -- value as `x`
    CollatzConjecture : Type
    CollatzConjecture = (boundFinder : Natural -> Natural ***
                         (n : Natural) -> DelayEqual (unwrap (boundFinder n) (collatzLoop n)) (Now one))

    proofOfCollatzConjecture : CollatzConjecture
    proofOfCollatzConjecture = ?

    disproofOfCollatzConjecture : CollatzConjecture -> Empty
    disproofOfCollatzConjecture (boundFinder, purportedProof) = ?
With this definition, a value of type `CollatzConjecture` will be a proof of the Collatz conjecture. Since we can't use tricks like `loop`, we're forced to actually construct a value of the required type. This value will be a pair: the first element of the pair is a function of type `Natural -> Natural`, which we call `boundFinder`. The second element of the pair is also a function, but it has type:

    (n : Natural) -> DelayEqual (unwrap (boundFinder n) (collatzLoop n)) (Now one)
This says that given any `n : Natural`, we can return a proof that:

    DelayEqual (unwrap (boundFinder n) (collatzLoop n)) (Now one)
This says that `unwrap (boundFinder n) (collatzLoop n)`, i.e. removing `boundFinder n` layers of `Later` wrappers from `collatzLoop n`, is equal to the value `Now one`.

To disprove this form of the Collatz conjecture, we're given a supposed proof: i.e. we're given some `boundFinder` function and a value supposedly proving the equation described above. We need to find a contradiction to that supposed proof, which might be an `n : Natural` which reduces to `Now x` where `x` is not 1; or which we can show doesn't reduce at all. Either way this would contradict the claim made by `purportedProof`, and we could use that contradiction to prove anything ("ex falso quodlibet") including the `Empty` return value we need (just as if we had `loop`!).

The thing is, even if we set up all of this machinery, there is an unfortunate fact staring us in the face: compare the definition of `Natural` to the definition of `Delayed`. They're almost identical! The only difference is that `Now` takes an argument but `Zero` doesn't. If we think about what `Succ` is doing, it's just adding a wrapper around a `Natural` to represent "one more than" (e.g. `Succ Zero` is 1, `Succ (Succ Zero)` is 2, etc.). If we think about what `Later` is doing, it's just adding a wrapper around a `Delayed` to represent "one more step". In essence we've just traded one form of counting for another! It doesn't actually get us any closer to solving the Collatz conjecture, other than giving us something to plug a proof attempt into, such that it will be verified automatically. It's like setting up a Wordpress blog: it lets us say anything we like to the world, but doesn't help us figure out what we want to say ;)


hey, just so you're aware I am working my way through your answer. I appreciate the depth of your comment and will be coming back to reply once I've fully understood your answer. I don't know how long it takes for HN to lock a thread for replies, but if you want to discuss this further would you mind dropping me an email (my address is in my profile)? It might take a couple more days for me to formulate a response.


That is quite possibly one of the best comments ever posted to HN!


hey Chris, I've read through your argument a few times and completely see your point - I've encountered this kind of second-system loop myself while trying to whiteboard a way to map a function to a representation that can be treated more like an algebra. Now my angle of approach is - I would be delighted for my system to not permit the collatzLoop function to be declared without constraining the range of natural numbers allowed!

From the sounds of it a totality checker may be what I'm actually wanting to write, rather than a type system - to be honest, I wanted to avoid explicit type notation within this language anyway, I'd much rather treat the functions included as a mapping where the output type inherently contains the relationship it shares with the inputs. I see two approaches I could proceed with: one fairly simple but not ideal, and one that is more optimal but that I do not know how to approach.

Scenario 1: What I'm imagining is a checker that, given a specific recursive function, can determine the exact input type that is required in order to terminate in 1 execution, 2 executions, and so on up until a defined limit. So for the collatz conjecture, it could determine that only the input `1` would halt on a single execution, `2` on two executions, etc but then would open up to either `32` or `5` after five executions. That seems simple enough to implement by converting branches into multiple function signatures and "solving for x" with substitution. However, I would love to be able to ensure that a function halts after an unrestricted number of recursions, and it seems as though it would be possible - because we can intuitively know that, for example, a function `f(n: Nat) => n is even ? f(n / 2) : f(n + 1)` will always halt on valid input.
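That backwards enumeration is easy to sketch (my own Python, hypothetical names): invert the two Collatz rewrites and walk out from the halting value 1, level by level.

```python
def halts_in_exactly(k):
    """Invert the two Collatz rewrites and walk backwards from the halting
    value 1, so level k holds every input that halts in exactly k steps.
    (A sketch of the 'solving for x with substitution' idea above.)"""
    level = {1}
    for _ in range(k):
        nxt = set()
        for m in level:
            nxt.add(2 * m)                   # 2m is even and halves to m
            if (m - 1) % 3 == 0:
                x = (m - 1) // 3
                if x > 1 and x % 2 == 1:     # an odd x > 1 steps to 3x+1 = m
                    nxt.add(x)
        level = nxt
    return level
```

As predicted above, `halts_in_exactly(1)` gives `{2}` and `halts_in_exactly(5)` gives `{32, 5}`: the first level at which the set branches.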

Scenario 2: If we can intuitively know the fact noted above then surely that intuition could be encoded into a program? It seems like more of a meta concept in that I don't believe it would be possible to show such a fact using simple algorithmic substitutions (although I would welcome being wrong on this, makes my life easier!). From reading about the history of various mathematical fields over the past couple of weeks it seems like this pattern has shown up in maths on numerous occasions and appears to be basically a fundamental trait of logic - that any closed system that is permitted to contain references to the whole system suddenly gains a whole new set of behaviours that cannot be understood from within the system itself. I mean, that's what GEB seems to be partially about too. The set of all sets that do not contain themselves, the proof of the halting problem including an oracle that can call itself with itself as input, that kind of thing. And there's often a new system "on top of" the self-referential system that basically takes the self-reference out and provides a new meta-level for reasoning about the lower system, but with extra added self-reference. I guess what I'm trying to get to is - this scenario #2 seems like it must match up to some theory or another that is perhaps one of these meta-theories atop something like category theory or deductive logic - not looking at the relationships between input and output directly, but the relationship between the relationships, to determine whether it eventually resolves down. Something that could tell that the above-mentioned function halts, from base principles. I've found something that seems comparable in axioms and axiom schemata, but being able to build an axiomatic system seems pretty deep down the logic rabbit-hole and I don't want to dive in head-first without knowing I'm at least looking in the right direction, y'know?

Anyways, whether you respond to this comment or not, I want to thank you for your comment. You've described in a much more comprehensive way, something that I've been bumping into while researching this topic and that helps a great deal to verify to me that going down the lower meta-level route (damn you, lack of relevant terminology!) of trying to convert the logic into a different representation and solve that way is a fruitless exercise - though I still don't know if it is possible to move to a higher meta-level and make some progress from there; I guess that's why I'm asking :)


It does sound like you might be after something closer to totality checking. I'm not as familiar with that, but I can maybe give a few pointers and a bit of terminology:

One easy way to guarantee termination, even for functions with input/output spaces that are infinite, is using "structural recursion". This is when we only make recursive calls on a 'smaller piece' of our input. The classic example is the `Natural` type, where we can do:

    f Zero     = somethingNonRecursive
    f (Succ n) = somethingInvolving (f n)
We can tell just from the syntax that the argument will always get smaller and smaller for each recursive call. Note that structural recursion is syntactic: we cannot do something like `f n = f (n - 1)` since the syntactic form `n - 1` is not a 'smaller piece' of the syntactic form `n`; whereas `x` is a smaller piece of `Succ x`. This makes structural recursion pretty limited. As an example, the "Gallina" language of the Coq language/proof assistant does totality checking via structural recursion, and it very quickly becomes unusable (more on Coq below!).
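As a toy illustration (my own encoding, not from the comment), model Peano naturals as nested tuples; a structurally recursive function only ever recurses on the payload inside a constructor, so on a finite value it must bottom out.

```python
# Zero is the empty tuple; Succ n wraps n in a 1-tuple.
ZERO = ()
def succ(n):
    return (n,)

def to_int(n):
    """Structural recursion: the only recursive call is on n[0], which is
    a strictly smaller piece of the input, so this always terminates."""
    return 0 if n == () else 1 + to_int(n[0])

three = succ(succ(succ(ZERO)))
```

Something like `to_int_bad(n) = to_int_bad(succ(n))` would type-check in Python, of course; the point of a structural-recursion discipline is that it's rejected purely by looking at the syntax of the recursive call.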

There's actually a subtlety here: if `Natural` is defined inductively then all of its values are finite (it is the least fixed point of its constructors, sometimes written as a "μ" or "mu" operator; similar to the Y combinator). In particular its values are `Zero`, `Succ Zero`, `Succ (Succ Zero)`, and so on. This means that structural recursion on `Natural` will eventually stop, either by hitting `Zero` or some other "base case" (e.g. functions involved in the Collatz conjecture might stop at `Succ Zero`, etc.).

We could instead decide to use the greatest fixed point (sometimes written as a "ν" or "nu" operator); we call such definitions coinductive, and a coinductive `Natural` type would contain values `Zero`, `Succ Zero`, `Succ (Succ Zero)`, and so on and an infinite value `Succ (Succ (Succ ...))` (sometimes written as "ω" or "omega"; not to be confused with the non-terminating program omega `(λx. x x) (λx. x x)`!). Structural recursion on values defined coinductively might not terminate; but it does coterminate, which is the idea of being "productive" like I showed with `Delayed`. (Note that in my Collatz examples I was assuming that `Natural` was defined inductively and `Delayed` was defined coinductively). Note that all datatypes in Haskell are coinductive. Also note that a type may contain many different infinite values, for example if we define trees like:

    data Tree : Type where
      Leaf : Tree
      Node : Tree -> Tree -> Tree
If this is coinductive we can have any arrangement of finite and infinite branches we like. Another interesting example is `Stream t` which is like `List t` but only contains infinite values (there is no base case!):

    data Stream (t : Type) : Type where
      Cons : t -> Stream t -> Stream t
We can extend structural recursion a little to get "guarded recursion". In this case when we define a function like `f(arg1, arg2, ...)` which makes recursive calls like `f(newArg1, newArg2, ...)` we must also specify some function, say `s`, such that `s(newArg1, newArg2, ...) < s(arg1, arg2, ...)`. The function `s` effectively calculates a "size" for each set of arguments, and if we can prove that the "size" of the recursive arguments is always less than the "size" of the original arguments, then we guarantee termination/cotermination. This would allow things like `f n = f (n - 1)`, given a suitable "size" function (which could be the identity function in this case) and a proof (manual or inferred) that it always decreases. The Isabelle proof assistant does totality checking in this way, and I think it's also how Agda works (but Agda tries representing this using "sized types", which confuses me ;) ).
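Here's a runtime caricature of guarded recursion in Python (my own sketch; a real totality checker proves this statically, before running anything): a decorator that checks the supplied size measure strictly decreases across nested recursive calls, using the identity measure for the `f n = f (n - 1)` example from above.

```python
import functools

def guarded(size):
    """Assert that the given 'size' measure strictly decreases on every
    nested recursive call; raise as soon as a call fails to shrink."""
    def wrap(f):
        stack = []                      # sizes of the calls currently in flight
        @functools.wraps(f)
        def g(*args):
            s = size(*args)
            if stack and s >= stack[-1]:
                raise RuntimeError(f"size did not decrease: {s} >= {stack[-1]}")
            stack.append(s)
            try:
                return f(*args)
            finally:
                stack.pop()
        return g
    return wrap

@guarded(size=lambda n: n)              # identity measure, as suggested above
def count_down(n):
    return 0 if n == 0 else count_down(n - 1)
```

`count_down(5)` returns 0, whereas decorating a definition like `bad(n) = bad(n)` with the same measure raises on the very first recursive call, since its size never shrinks.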

Talking about meta levels, proof assistants can provide an "escape hatch". The way Coq works is that proofs are represented in the "Gallina" language, which is total. We can write Gallina terms directly, but it can be painful. Instead, we can use a meta-language called "Ltac", which is a (rather awful) untyped Turing-complete imperative programming language. We can use Ltac to construct Gallina terms in whichever way we like; when we "compile" or check a Coq program/proof, that Ltac is executed and, crucially, it might not halt! We don't actually need such a large separation between object level and meta level: there's a really nice Coq plugin called "Mtac" which extends the Gallina language with a `Delay t` type (a polymorphic version of my `Delayed` example; we could say `Delayed = Delay Natural`), and the only "meta" operation it defines is converting a value of `Delay t` to a value of `t`, which it does by unwrapping any `Later` constructors at compile/check time. Just like Ltac, this might not halt (since there could be infinitely many `Later` wrappers).

Another thing to keep in mind with totality checking is "strictly positive types". Types are called "positive" or "negative" depending on whether they contain functions which return or accept values of that type. I've never looked into this in much depth (since languages like Coq will tell you when it's not allowed), but it seems like types which aren't "strictly positive" allow us to define infinite values, which shouldn't be allowed for inductive types.

Another thing to look at regarding self-reference is predicative vs impredicative definitions. Impredicative definitions are like "lifting yourself up by your bootstraps" (see e.g. http://wadler.blogspot.com/2015/12/impredicative-images.html ;) ).

I don't follow research in totality checking too closely, only where it overlaps my interests in programming language design and theorem proving. One memorable paper I've come across is https://dl.acm.org/citation.cfm?id=2034680 but whilst it's a nice application of language and representation, it's not clear to me if it's much of an advance in totality checking (i.e. it's a neater way of doing known things). I also see a steady trickle of work on things like "parameterised coinduction" and "circular coinduction" which seem related to totality, but they're slightly above my ability to tell whether they're brilliant new insights or just niche technical results. At least I hope they lead to better support for coinduction in Coq. You can tell how long someone's worked with this stuff based on how much they joke about the name (although I couldn't resist laughing when someone seriously remarked that "well of course everyone loves Coq" ;) ) versus how much they joke about "lol lack of subject reduction".


Thanks for coming back to me! Totality checking sounds exactly like what I was looking for; because your examples of structural and guarded recursion closely match parts of the algorithm I had already figured out through playing around with logical structures.

You've given me a lot to look into and I sincerely appreciate the effort you've put into opening up these fields to me. There's a few concepts you've mentioned that are still a little hazy to me (particularly coinductive types and the "Delayed" concept - which sounds somewhat like a generator function?) but now I know the terminology I can begin to research these topics myself in earnest. I always find with the sciences that the hardest part of diving into a new field is simply knowing what you're looking for, followed shortly by building an accurate mental model of the concepts. You've helped a great deal with both, so thank you :)


Turing machines either output yes/no or go on indefinitely, so that might map onto your possible outputs of halts/doesn't halt/don't know. I'm not really sure how much to assume you know, so the wiki page on the halting problem might help: https://en.wikipedia.org/wiki/Halting_problem


> what is the property of a program that makes it undecidable?

It's important to keep in mind that undecidability and the halting problem only apply in general, not for some particular input or program.

With that said, I think it's to do with program length. The Busy Beaver numbers tell us the maximum number of steps a turing machine can take before halting, when given a program of a certain length. No matter how clever we try to make our static analyser, we ultimately have to write it down as a program (e.g. a turing machine tape, or something more sane ;) ) and this will have a particular, finite size. Hence there is an upper bound on the number of steps that any static analyser can take (the Busy Beaver number for its program length), unless it runs forever without giving us an answer.

Consider a particularly simple static analyser: it reads in a program, executes that program for N steps, and checks to see if it halted yet. If it halted, the static analyser halts with the answer "halts"; if it didn't halt, the static analyser halts with the answer "dunno". This static analyser can be improved by using a larger value of N; yet that necessarily requires a longer static analyser program (to contain the larger N value). That's what the Busy Beaver numbers tell us: calculating a number larger than BusyBeaver(X) requires a program more than X bits long; so we can't use a value of N that's larger than the Busy Beaver number of our static analyser's program length. If we want a bigger N, we will eventually need to make our program longer, since that's the only way to increase this bound caused by the Busy Beaver.

Note that there is another way we could "improve" such a static analyser: we could remove the cutoff completely, and just run the given program until it halts. In this case, the static analyser itself may not halt, so we're back to square one ;)
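That simple analyser fits in a few lines of Python (my own sketch): represent a program as a step function from state to successor state, with `None` meaning "halted".

```python
def analyse(step, state, n):
    """The naive analyser described above: run the program for at most
    n steps, then answer 'halts' or 'dunno'."""
    for _ in range(n):
        state = step(state)
        if state is None:
            return "halts"
    return "dunno"          # budget exhausted; halting status unknown to us

# The Collatz process as a step function (halting once it reaches 1):
collatz = lambda x: None if x == 1 else (x // 2 if x % 2 == 0 else 3 * x + 1)
```

`analyse(collatz, 6, 20)` answers "halts", while `analyse(collatz, 6, 3)` answers "dunno": same algorithm, and the only way to improve it is a bigger `n`, which has to be written down somewhere in the analyser's own (finite) program text.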

Now consider that there are infinitely many programs we might feed into a static analyser. These input programs can be arbitrarily long (much longer than our static analyser), and hence the limits imposed on them by the Busy Beaver numbers can be arbitrarily higher than the limits of the static analyser. In particular, their control flow can be arbitrarily complex, such that figuring out whether or not it halts can require an arbitrary amount of calculation steps. Since the number of steps any particular static analyser can perform is bounded by its Busy Beaver number, there will be infinitely many programs which that static analyser can't figure out; even though the same algorithm could figure them out, if given a larger limit to work with.

In this sense, we can think of all static analysis algorithms as being like Busy Beaver approximators: they're trying to calculate the largest number they can, in order to reach the number of steps required to analyse whatever program they've been given. They can do this in two ways: by bloating out their codebase to be much bigger than the programs they analyse, or by making their code more complex than the programs they analyse.

From a practical point of view, most human-written programs are incredibly simple and verbose; so these Busy Beaver limits aren't really a problem. Yet they're the reason that certain problems cannot be solved in the general case.

Some nice links:

https://www.scottaaronson.com/writings/bignumbers.html

https://en.wikipedia.org/wiki/Chaitin%27s_constant

https://en.wikipedia.org/wiki/Kolmogorov_complexity#Chaitin%...


Though interestingly the beginning of the discussion of 'Zen':

> This might seem unfair, so I’ll give a very specific but pervasive instance of this sort of meaningless flavor. Many parts of the book invoke Zen, which is a school of Buddhism that originated in China and has spread to several other Asian countries but, in the Western mind, is usually associated with Japan. (We do, after all, know this school by its Japanese name Zen and not by its Chinese name Chán, its Korean name Sean, or its Vietnamese name Thiền.)

seems to miss the fact that Buddhism originated in India and that not only the Japanese word Zen but also Chinese Chán are adaptations of the Sanskrit dhyāna (meaning originally something like 'concentration' or 'contemplation').

(Obviously philosophy is not static, and surely there were Chinese developments in it, but by the same token there were also Japanese developments and to point out that Zen Buddhism did not originate in Japan and fail to mention the anterior roots seems rather odd to me.)


> seems to miss the fact that Buddhism originated in India and that not only the Japanese word Zen but also Chinese Chán are adaptations of the Sanskrit dhyāna (meaning originally something like 'concentration' or 'contemplation').

Etymologically this is true, but in the sense of tradition, India doesn't really have a Zen tradition; it really did originate in China. Dhyāna is much broader, and is part not only of Buddhism but also of Hinduism and Jainism.


> India doesn't really have a Zen tradition,

India certainly has meditative/yogic practices and it seems unlikely to me that some bits of these wouldn't have travelled from India to China along with Buddhism and Sanskritic vocabulary.


Yes, of course India has a deep history of meditation and yoga and etymology does show relationship, but it is incorrect to say that it means the same thing in that place, most especially because the Sanskrit word itself wasn't purely preserved, it was mutated into a native sounding Chinese and Japanese word that took on a new, specific connotation reflecting the practice that evolved there in China.

A simple example of this is the meaning of "pudding" in British English vs American English.

> Pudding originally referred exclusively to varieties of sweet natural-casing sausage. The meaning grew to encompass anything boiled or steamed in a bag, then shifted to refer primarily to desserts, especially those cooked by boiling or steaming. Most recently, in American English, it has changed to refer exclusively to the thick custardy dessertstuff [1]

So this simple example illustrates that location can have a specific effect on the meaning and connotation of a word, and that a word may have a broad meaning from where it originates, but can have a very specific and different meaning elsewhere.

[1] https://english.stackexchange.com/a/12564


I do know how semantic change works, yes. And your pudding example shows that borrowing (and phonological adaptation) is likely beside the point. Words can change their meaning (and generally do) even when speakers are staying in the same geographic location (e.g. Sanskrit lokaḥ "world" became the Hindi plural marker log).

But all of this seems somewhat irrelevant to me for the topic at hand. Buddhism has a long history in India before reaching China, including meditative/concentration. Obviously there were significant developments in China (and Japan) which make Zen Buddhism distinct, but it seems odd to ignore the Indian Buddhist base it's built upon.


I'm not ignoring its foundation in India, only affirming that what we know of as Zen was critically developed in China and Japan. American English has roots in Britain, but it's distinctly American.


This is what happens decades later, when something that WAS status signalling at one point no longer is.


Since the record player analogy was mentioned, I'll just quote the relevant entry from the Index of GEB, page 772. If the potential wonderment of the curiosities and untapped frontiers teased herein doesn't explode the mind of your 18-year-old self reading the book for the first time, there may be no hope for you.

record players: alien-rejecting, 487-88; Epsilon-Zero, 486; family of, in Crab's jukebox, 154-57; Grand Self-assembling, see record player Epsilon-Zero; as information-revealers, 158-61, 164; intrinsic vulnerability of, 75-86, 102, 424, 470, 486-86, 536, 543, 584, 721, see also Tödelization, TC-battles, etc.; likened to formal systems, 84, 85; low-fidelity, 77, 85, 101, 406-7, 470; Omega, 78, 468, 483-84; Numbers 1, 2 ... etc., 76-77; Tortoise-chomping, 483, 487-88; two-channel monaural, 634, 669; see also jukeboxes


The reviewer's interests are only superficially discussed in the book, even the ones readers most often express interest in. I believe the book would have been far less interesting to me had I not already looked into the philosophy of mind and the problems of consciousness and neurobiology. I enjoyed the math, the music, the art, the cultural references, and the surreal metaphors; but those weren't the full story: Hofstadter says outright, albeit only towards the later chapters iirc, that the book is not about those things. These references, though generally enjoyable, serve as an indirect path towards expressing a strange proposition. Perhaps it was done more clearly in IAASL, but I've not read it; personally, though, I enjoyed the rhetorical flourish with which it was done here.


Yes, an irritatingly-written book, very hard to enjoy despite seemingly being perfectly designed for my interests. I feel the same way about Iain Sinclair's books, which I kept on buying until I eventually admitted defeat that they're just not for me, despite the amount of fascinating stuff within


I think there are at least two groups of people here, completely talking past each other. This happens a lot on HN. The two big groups seem to be (1) people talking about enjoying popular science writing, and (2) technical practitioners talking about what they do. The only thing we maybe forget as a community is that (1) and (2) are different genres that don't overlap very much.

One group is trying to read GEB as a popular science book and saying, wow, it's an unusually good one. The dialogs are a nice rest from the technical stuff and probably convey something.

Another group is trying to read GEB as an introductory technical book and saying it's a mixed bag. The dialogs are a distraction from the technical stuff and sometimes don't convey enough.


I share the same sentiment as many of the commenters. I found GEB to be unnecessarily verbose and repetitive. But that is not to say that I wasn't impressed by the book. Irrespective of your disposition, there is a lot to take from it. The first time I previewed it, I was so blown away by its scope and depth that I decided to buy it. But after a few chapters, I found it difficult to get past the fluff writing. I will take the advice of skipping the first 300 pages and re-attempt this.

Like one other commenter, I too recommend the Open Courseware on GEB. I have only watched the videos on the chapters that I have already completed. It offers a fresh perspective and academic take on the material presented in the book.


Did I just grow up in unintelligent circles? I never even heard of GEB until I was around 28 (I only heard of IAASL today thanks to this thread/article) and I'm only now just getting around to it. Did I miss the boat here now in my 30s?


Soooo, you didn't like a book that challenged you in a way that you did not like. I personally did enjoy it first time around.

It’s a venerated book among a certain stripe of computer scientist and mathematician.

That is a bit trite and if I may: a bit disingenuous. I'm neither a comp sci nor a mathematician but I love GEB. It is a classic. I first discovered it by accident at school. It did not change my life as such but I think that DH did have a bearing on the way I think even nowadays.



It is not obligatory to like it. Few grand works are truly flawless. The ideas underneath are entertaining and informing, and they made me think. (I got to read a pre-publication proof copy in the UK, on the brink of a university Computer Science degree.)

Sure, it could have been shorter. I bet if you cut any bit, somebody out there would complain it was missing.


Like several others I read GEB during my undergrad days. I recall attending a local CogSci roundtable to announce my achievement, which I then viewed as something of a rite of passage. When I told a prof I had just read GEB he laughed and said, "Oh no you haven't. You've just started."


Read GEB when I was 20. Absolutely loved it, and it had a big impact on my choice of studies and career. Since then I've had a lurking fear that I'd be grossly disappointed if I picked it up again now. Your review indicates I should probably let it be. Thanks!


There's also the fact that Hofstadter persistently equates strong AI with grandmaster-level chess-playing, and an entire chapter of nothing but totally unscientific, unfounded, incredibly wrong, biology-free speculation about how the human brain works.


> decided to explain its working by coming up with a metaphor involving record-players and records that are designed to physically break the record-players they’re played on

Wait... I've seen this style of explanation somewhere...

M O N A D T U T O R I A L S


As I've heard it explained, monads are simply monoids in the category of endofunctors, or something.

(I know what monoids are in group theory, and have a vague glimpse that functors are maps between categories. I figure that endofunctors are functors from a category to itself; how they come to be their own category or how monoids even apply, well.)
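For what it's worth, the slogan gets a little less mystical with a toy example. Below is a minimal Maybe-style monad sketched in plain Python (the names `unit`, `bind`, and `safe_div` are my own for illustration, not from any library): `unit` plays the role of the monoid identity, and chaining `bind`s plays the role of the associative multiplication.

```python
# A minimal "Maybe" monad sketch. None plays the role of Nothing;
# any other value is treated as Just(value).

def unit(x):
    # Lift a plain value into the monad (the monoid identity, roughly).
    return x

def bind(m, f):
    # Sequence a computation, short-circuiting on None (failure).
    return None if m is None else f(m)

def safe_div(a, b):
    # A partial function made total: division that can "fail".
    return None if b == 0 else a / b

# Chaining: (10 / 2) succeeds, then dividing by 0 fails, and the
# failure propagates through every later bind automatically.
ok = bind(unit(10), lambda x: safe_div(x, 2))   # 5.0
bad = bind(ok, lambda x: safe_div(x, 0))        # None
```

The monoid analogy, as I understand it: nesting `bind`s is associative and `unit` is a left and right identity for it, which are exactly the monad laws.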


I agree that GEB weaves a nice braid in many ways, just not an eternal golden one.


This critique sounds like it comes from someone who has engaged with a wide range of topics enough to know that there is a lot of detail and nuance, but not enough for any of it to have coalesced into a simple understanding of it.

Cage's music is annoying and pretentious, and I largely think that because Cage was annoying and pretentious. He made a classic mistake in the art world of trying to tell people what his work was about and then getting pissed off when people either still didn't like it or chose to interpret it differently. That's a rookie mistake, and it's a fight artists never win. The smart ones keep their mouths shut and let the work speak for itself. The good ones make art that can be meaningful to a lot of different people in a lot of different ways. This was one of the great sticking points in the "War of the Romantics" between the New German School (Wagner and Liszt) on the side of program music against the Leipzig group (Schumann, Brahms) pushing absolute music. This battle between composers arguing about whether meaning should be explicit and imposed or if it should be intrinsic to the music and left to the listener was the context for Cage's work.

And his work is most charitably interpreted as social commentary, even though that's not what he wanted. The best thing you can say about it is that it asks the question, "What is music now?" But he thought he was the first person to ever ask this question and got really irritated when people pointed out that academics had been asking this question for a couple thousand years. So, a lot of very serious musicians thought he was pretty dumb, and he got pissed and said it's not dumb, you're dumb and you don't understand it, and if you were smart you would think it's all genius. It's really hard to defend that kind of a person, but it gets easier as time passes and you can stop thinking about the artist as a person and think of the art. But as a professional violinist for over 30 years and a music theorist, I don't object to the Hof's assessment. I could say similar things about Milton Babbitt too, integral serialism, and a lot of contemporary classical music. It would probably get a response from a certain kind of person who would say that I just don't get it or that I haven't given it proper study. Well, I have, and I think it's kind of dumb, it's been done before, and I no longer have the youthful zeal it takes to debate these things point-by-point.

Similarly, the criticism of the treatment of Zen also seems a little youthful. I grew up in a very Catholic family with parents who basically don't accept Vatican 2, and still practice Latin mass, and a large part of my home schooling was the intense study of Catholic history, philosophy, and theology. I took being Catholic extremely seriously for a very long time. But over the years I came to the conclusion that it was basically a bunch of cute fairy tale stories. Wonderful philosophy; terrible religion. And I'm not blind to the rich cultural heritage the Catholic church is responsible for. As a classical musician, I'm extremely aware of (and grateful for) how much the Church is responsible for Western Music as we know it today.

As I was drifting away from Catholicism, I got pretty deep into Buddhism. Ultimately I came to a similar conclusion. Great philosophy; silly religion.

Ultimately this article comes across almost as whiny and angsty and teenagery as John Cage does when he's defending his music: "You just don't understand me!!!"

No dude. Trust me. I understand it. It's just kind of stupid. You'll grow out of it eventually.

The book isn't perfect by any means. It is contrived. I don't think you could write a book like that today. But it was written in a very different time. A time when the knowledge divide between ivory tower academics and lay people was explicitly desired by the academics in a wide variety of fields, including classical music. Babbitt was famous for arguing that classical music should not be comprehensible to anyone who didn't have a PhD in Music Theory or Composition. That classical music deserved to be as respected and obtuse as advanced math and science, and that normal people didn't deserve to be able to understand great art. This book was a direct reaction to that academic pretense, and it presaged an era that has disseminated more knowledge across a wider range than any other explosion of knowledge since the printing press--and far outdone it.

Outside of that context, the book almost doesn't make any sense. Why would anyone write like this in a world where the top academic minds of every field lay out their innermost thoughts about the most challenging problems in their field on their blogs and talk to riff-raff in the comments and try to make things as understandable as possible? Well, no one writes like that any more. Not even Hofstadter writes like that any more. It is an artifact of its time . . . a lingering remnant of a world where knowledge had to be protected by intellectual elites and a wide-ranging understanding of difficult concepts was seen as a threat to the academic establishment, and a reminder of how recently things worked that way.

When I was a kid violinist just getting into the completely new world of these concepts, it was illuminating and perhaps life-changing. As a grown-up now with a different perspective, I can't see it having nearly as much of an impact on a younger person in today's context. So in my world, its value has shifted: it's no longer the kind of book that first illuminates a certain set of ideas, but a reminder that the world we came from is way more recent than it feels like.


There's a pretty big clue on page 9 that the book is supposed to be read backwards and upside down, starting at the end of the last dialogue.

I know I found Hofstadter's easter egg...


It's incredibly ironic that the author mistakes the characters' dialogue for the author's own opinions in Achilles and the Tortoise's remarks on 4'33".


Why is that ironic?


I don't love godel escher bach cause when i took my mathematical logic class the most annoying students were the ones who read that book!!!


A lot of people think that the goal of the book was to explain Gödel's theorem. Some even think the book shows that "mathematics, art and music are the same". In fact, as Hofstadter says in the preface (to the 20th Anniversary Edition I think), this is utter nonsense and he intended no such thing: his goal with the book was to build up a certain philosophical argument about what consciousness is (and his answer was something like self-reference, or transcending "levels").

I might be misquoting this, because, like most people, I found the way he builds up this argument more fun and memorable and interesting than the argument itself.

Do not read the book to understand Gödel's theorem, or Escher, or Bach. In fact, Hofstadter is not even really interested in Gödel's theorem itself: his interest is more in the particular trick that Gödel used to prove the theorem (Gödel numbering), which allows Gödel to encode statements about the system as arithmetic statements (within the system). And although the explanation in GEB is surely not the best explanation of Gödel's theorem, it is one of the most enjoyable musings on things "like" Gödel numbering — something you won't find in other places that are more interested in Gödel's theorem itself.
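To make the trick concrete, here is a toy sketch of the prime-power encoding at the heart of Gödel numbering (in Python; the function names are my own). A sequence of symbol codes becomes a single integer, and unique factorization lets you decode it again. This is just the flavor of the construction, not the actual arithmetization used in GEB or in Gödel's proof.

```python
def primes():
    # Generate 2, 3, 5, 7, ... by trial division against earlier primes.
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(codes):
    # Encode [c1, c2, c3, ...] as 2**c1 * 3**c2 * 5**c3 * ...
    # Codes are assumed positive so trailing symbols aren't lost.
    g, ps = 1, primes()
    for c in codes:
        g *= next(ps) ** c
    return g

def decode(g):
    # Recover the codes by reading off each prime's exponent.
    codes, ps = [], primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e)
    return codes

n = godel_number([3, 1, 2])   # 2**3 * 3**1 * 5**2 = 600
assert decode(n) == [3, 1, 2]
```

Because encoding and decoding are ordinary arithmetic, statements about strings of symbols become statements about numbers — which is what lets a formal system "talk about" its own formulas.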

I think if one tries to read the book expecting to learn or understand something, one can get frustrated. But like other Pulitzer Prize winners for nonfiction (say, The Soul of a New Machine), any learning is only incidental; what one can enjoy is the way the book is constructed. It is a work of literature, not mathematics. The dialogues are my favourite parts of the book — sure, they may not be realistic, but they take surprising turns, and they often have a lot hidden in them (I remember reading one halfway through and suddenly suspecting something, then noticing that the first letters down the page formed an acrostic). Other things I loved most about the book are the way it is self-referential, the way everything fits together, the way elaborate puns seem to be set up over the course of 300 pages, etc.

All in all, it is simply a book of Hofstadter having fun.

(I had the same impression of Hofstadter's translation of Pushkin's Eugene Onegin — I've read at least three other translations; Hofstadter's may not be the most literal, but it's surely enjoyable and has the most outrageous rhymes: who else would rhyme "debts" with "Till his creditors cast their nyets"! You can read online an article he wrote [1] before starting his own translation, then a very critical review of his translation [2] whose "bad" examples (and the whole first chapter linked there) show what's actually fun.)

You can see him enjoying himself, and you can go along if you'd like to have fun as well, forgiving a few things here and there, skipping parts that you don't find as much fun.

[1]: https://archive.nytimes.com/www.nytimes.com/books/97/07/20/r...

[2]: https://archive.nytimes.com/www.nytimes.com/books/99/09/12/r...


I've probably asked this before, but does anybody know of an ebook of GEB?


As far as I know, there's no legal ebook of G.E.B.

I've this vague memory of hearing that the author regards it as a physical work of art; plus maybe the "ambigrams" were only licensed for the print edition, and the book won't be legally distributed without them.


It was an enjoyable read. I thought that it had its moments.


Exactly how I feel about this book.


People read the dialogues?


I actually read just the dialogues to my son as bedtime stories when he was 5 or 6 - they actually work really well that way.


This is a perfect opportunity to whip out one of my favorite quotes by Simone de Beauvoir:

“Proust observed with astonishment that a great doctor or a great professor often shows himself, outside of his specialty, to be lacking in sensitivity, intelligence, and humanity. The reason for this is that having abdicated his freedom, he has nothing else left but his techniques. In domains where his techniques are not applicable, he either adheres to the most ordinary of values or fulfills himself as a flight.”

I think this quote applies to a lot of people on this site.


What does "fulfill himself as a flight" mean?

Is the lesson that since professors are "less free" than everyone else, they criticize everyone else out of bitterness? Or is it that since they're so specialized and able to make a living only in their specialty, they don't need to practise good judgement outside their discipline? Is it that they live in an ivory tower and don't know the difference between theory and practise? What does that quote mean?

...

Maybe they "abdicate their freedom" when they have to teach students and battle administration. I seriously don't know what that quote means.


From what I got from the essay (it's been a while...), people who chose a "serious" path in life see no value in things that do not contribute to their goal.

E.g.: If you focus too much on engineering, you stop being able to see value in other aspect of life, such as philosophy.

De Beauvoir describes several states of freedom of mind:

1. Children, who believe what they're told and don't think about their freedom

2. The serious man, who has chosen a goal in life and does not think about his freedom (which the quote is about)

3. Nihilists, who realize they're free but choose to do nothing

4. Adventurers, who accept their freedom but choose no precise goals

5. Passionate men, who choose a goal according to their freedom and show no concern for others

6. Genuine freedom (freedom + goal + concern for others' freedom)


Thanks for that overview of her categories; it got me interested and searching a bit for a more detailed version. Now I think I'll give the original essay a shot, and I hardly ever read philosophy. Link to save anyone else 10 seconds of googling: http://faculty.webster.edu/corbetre/philosophy/existentialis...

(Also, this feels eerily similar to the first time I realized "Oh, going and reading the source of $LIBRARY is an option if the docs are too vague.")


I also see an effort-reward dynamic, similar to Dutch disease or resource curse.

The specialised activity provides high reward, the generalised little, or at least less.


What does "fulfills himself as a flight."?

We need to define flight here: "a brilliant, imaginative, or unrestrained exercise or display" [1]. So he fulfills himself as a brilliant, imaginative, or unrestrained exercise or display. He toots his own horn: he promotes himself, boasts, brags, touts himself [2].

What does the quote mean?

They (some doctors and professors in this quote) have abdicated their freedom to their discipline or interest. They see all things through the single lens of their discipline, and all things outside of it as trite, "stupid", something not to think on at all or to think of only in how it relates to their specialty.

Take the result of Zuckerberg's "need" to monetize Facebook. This has dire consequences for people's security, and even people's sanity. To Zuckerberg these things are dealt with in a way "lacking in sensitivity, intelligence, and humanity"; all that matters is Facebook's bottom line. Further on in Simone de Beauvoir's Ethics of Ambiguity she writes: "The rest of the world is a faceless desert. Here again one sees how such a choice is immediately confirmed. If there is being only, for example, in the form of the Army, how could the military man wish for anything else than to multiply barracks and maneuvers?"

The overall idea being that people highly specialized and competent in one area make the logical fallacy of arguing from authority (themselves being the authority they are arguing from) and put forth the most mundane, nuance-free "ordinary values" in areas they have little understanding of, haven't researched, and aren't really interested in.

1. https://www.merriam-webster.com/dictionary/flight definition 5

2. https://en.wiktionary.org/wiki/toot_one%27s_own_horn


Your Beauvoir quote and clarifications jibe with a statement by David Krakauer: intelligence is search.

In which case, why particular people are more intelligent than others, and whether that intelligence is highly general (rare) or more specific (common, almost painfully or absurdly so at times), and why, is a fascinating question.

Beauvoir and Proust suggest that the mechanism is technique rather than a more general capacity, and hence focused on specific capabilities. That is, a method effective at crossing some specific divides, but not providing a deeper vision or reach, or higher vantage, generally. That last visualises a conceptual landscape as having peaks, valleys, and barriers (walls or chasms) to be crossed.

Technique providing bridging capacity, but not view-from-a-peak.


a more obvious contemporary example might be elon musk.


Indeed, and to Peter Thiel


I also dislike Thiel, but this is gratuitously negative and irrelevant to the discussion.


The critiques here are spot on. The dialogues at the beginning of chapters were tedious and mostly unhelpful. The treatment of anything far afield from math was pretty shallow and cursory.

That said, I still loved this book in spite of its imperfections. I think it’s important to read it at the correct time in one’s life. When I first read it (at 19 or so), it showed me a lot of things I’d never thought about before. The topics it didn’t really do justice, it at least hinted at things that were new and interesting to me.

It doesn’t hold up quite so well to subsequent readings by my older self, but I’ll always have a soft spot for it.


It seems clear from his own writing that he lacks the intellectual capacity to fully appreciate the book. In his own words he describes repeatedly the struggles he faced with simple and obvious conventions of both writing and thought.


tl;dr. Excellent first few chapters (which deserve the high praise and possibly are the cause of all the hype), gets less and less interesting. At some point, I dreaded returning to it and gave up.


Did you sum up your experience reading the book? Or did you try to sum up the critique we're discussing here? If it's the latter you failed.


The book.

Apologies, you are right, should have been clearer. I think the other comments at the time were also about the book, not the article.


So now we have to consider the specter of someone writing an overly verbose review of something that reduces to "tl;dr". Someone might summarize the review by writing "tl;dr: tl;dr". Or they might express the same thing, but overly verbosely, so someone might summarize their response by writing "tl;dr: tl;dr: tl;dr". In general, if N "tl;dr"s is a valid sentiment, and any valid sentiment can be expressed overly verbosely, then N+1 "tl;dr"s are a valid sentiment, so by induction any positive integer repetition of "tl;dr"s is valid. However, there exists some number L such that L repetitions of "tl;dr" is itself too long to express the necessary point, and someone who didn't read it would want a summary, the most succinct of which would presumably be "tl;dr: tl;dr nested L times". This meta "tl;dr" itself might be expressed overly verbosely, of course...


Well done. Shame it's buried under N comments.


"tl;dr" here actually means "Too long; didn't read", as in, GP's result of reading the book, and not the "here is the summary" we usually tend to associate with the abbreviation.


thanks - that was helpful to know


The Tortoise. Not Tortoise. Author doesn't seem to have delved too deeply into this unique book.


I haven't read the book. Can you elaborate on why the definite article is an important aspect of how the book is written or interpreted?


It's not the definite article per se. It's just that The Tortoise is specifically The Tortoise all through the book, just as in Lewis Carroll. Not getting that, I don't really believe anyone could have gotten the spirit of the book at all, no matter how all my downvoters may feel about it.



