The Paradox of the Proof (projectwordsworth.com)
270 points by ColinWright on May 10, 2013 | 118 comments



This is a great article. The writer has obviously spent a lot of time speaking to Mochizuki’s colleagues, and has explained the whole strange situation in a way that is layman-friendly without being wrong. It’s interesting that it’s part of an experiment in donation-funded journalism: I was sufficiently impressed that I donated a few dollars, but I fear that is not likely to be a common enough response to be a viable way of funding this sort of time-consuming journalism. I would love to be wrong about that.

There are interesting parallels and contrasts with Thomas Hales's proof of the Kepler conjecture. In that case, as far as I know, Hales did everything possible to help his colleagues understand the proof, but even so it was so long and involved that the referees declared themselves unable to be certain it was correct. Since then, he has been working on a formal, machine-verifiable version of the proof under the banner of the Flyspeck Project: http://code.google.com/p/flyspeck/


I wonder what would happen if, say, I allocated $10 per month to compensating the journalists whose articles I enjoyed.

And then it was distributed proportionally to the time I spent on each page, every single day. So if I spent 30 minutes reading this article, 10 minutes reading another, and 2 minutes on each of 10 others, this author would get $0.16 ($0.33 per day, times 30 minutes out of 60 total minutes = 0.5).

That's really not that much. With 10,000 people all following this same path, the author would "only" get $1600. Readership is likely to drop off exponentially after the initial publicity. Publishing two articles of similar quality per month would get the author a nice living wage; not too bad, but only if they can reach an audience of 10,000 interested readers who are in on this compensation system. Slim chance.
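A minimal sketch of that payout model in Python - all the figures (the $10 budget, the 30-day month, the reading times) are just the example numbers from this comment, not real data:

    MONTHLY_BUDGET = 10.00                 # dollars per reader per month
    DAILY_BUDGET = MONTHLY_BUDGET / 30     # ~$0.33 per day

    # minutes spent reading on one day
    minutes = {"this article": 30, "another": 10}
    minutes.update({"short article %d" % i: 2 for i in range(10)})
    total_minutes = sum(minutes.values())  # 60 minutes in total

    share = DAILY_BUDGET * minutes["this article"] / total_minutes
    print("one reader pays this author $%.2f" % share)        # ~$0.17
    print("10,000 such readers pay $%.0f" % (share * 10000))  # ~$1,667
    # (the comment above rounds these down to $0.16 and $1,600)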

Perhaps it works as a supplementary reward system. Another question: does this incentivize the right things? Will such metrics lead to longer articles which aren't necessarily interesting to read, but just take a long time? Does it disincentivize quality in any way?

More questions: how much would people be willing to contribute—$10 per month? $30 per month? How is it distributed—manually or automatically, by time or by rating? Is the small bit better than nothing? Does this incentivize people to a) produce good content, or b) pay for its consumption? Would people voluntarily buy into this system, or does it need to be a restrictive thing where only contributors can read articles?

Just thinking.


There is a micro-payment service called Flattr [1] which does something like this. It was founded by Peter Sunde, one of the Pirate Bay founders.

It's actually quite successful in the German web community, especially for podcasters. There are German podcasters who earn about 1,000 euros per month via Flattr.

[1] https://flattr.com


Someone actually mentioned Flattr to me today as a way to take donations for my own writings; I've always begged off because I've never wanted to take the time to set it up and clutter my site (more), but now here I see Flattr brought up again... Is there any way of even roughly estimating how much Flattr might bring in?


> Is there any way of even roughly estimating how much Flattr might bring in?

Of course: you try it, write up a massive future post about the results, and voila! But that probably wasn't the answer you wanted. I have no idea how many people are using it, but the idea sounds pretty great and if there's a good crowd to try it out on then HN readers wouldn't be a bad bet. I don't think anyone would mind another small button between PayPal and BTC.


> Of course: you try it, write up a massive future post about the results, and voila!


> I don't think anyone would mind another small button between PayPal and BTC.

Yes, fair enough... I think I may just be making excuses at this point to not try it out.


I just put it in my website footer [0]. Flattr is very flexible: you can use various methods, with more or less or no JavaScript. You can also link services like GitHub, so starring a repo leads to Flattr compensation.

[0] http://beza1e1.tuxen.de/articles.html


Yeah, I just added it to http://www.gwern.net/ . At first I was all 'o.0 you expect me to load a bunch of JavaScript just to add a button' but then I spotted that there was a purely static version and added it without a problem. We'll see how it goes.


Cool, I was aware of it; I was just thinking about making it automatic. :) It's an easy extension to Flattr, though. Excellent, thanks.


The only problem I see with this is that value isn't always proportional to time. Sometimes I spend a long time reading an article because it is written in an inscrutable style and it takes a long time to see what the author is saying. Other authors write so well that I can follow their arguments as fast as I can read. All other things being equal, the latter article is more valuable, but I spend more time on the former article.


Readability tried something like this. Among the problems: authors / publishers didn't sign up, and money collected had to be distributed or reimbursed. Unclaimed funds eventually were awarded to a charity.

Though I believe Readability had good intent, they took a great deal of flak for this.

Money weirds things.


Most of the weirdness in the case of Readability came from charging their users before they had established the relationships to pass the funds along. Collecting funds for authors sort of advertises that you are paying them, doing it while not being able to pay them is pretty shady.

I guess a better way to bootstrap such a thing would be as part of some sort of time tracker, show the user where they spent their time at the end of the month and wire it up to a tip jar system.


I don't know the situation, I wasn't there.

I can see this being the management team getting ahead of itself, though. I can understand how others would criticize. I don't see any obvious signs of intentional malfeasance - perhaps poor judgement.

The solution they proposed is one that I've been thinking should be applied to online content for some time.


This is exactly the sort of thing that I wish I could use something like bitcoin for. I groaned when I clicked "Pay Now" and saw PayPal. It doubles my time-investment in the article and significantly drags down my average enjoyment per minute.


I couldn't agree more - this is a great article on a complex topic. It's hands down the best article I've read in months and I also donated. It gives me hope that quality can thrive on the internet.


This reminds me of something I witnessed when I worked at Google. There was a long-standing problem; I don't want to go into details, but it had to do with web-search indexing, and it had gone unsolved for years even though it regularly affected search results in a negative way.

Then one guy solved it. The changelist he prepared wasn't even that long, but it was crazy complex. Helpful comments contained links to a 100-page academic paper on top trees. Since it had been long established on the team that the guy was (a) a genius and (b) a hundred percent reliable, it was generally assumed that the solution would work. However, all code changes at Google need to be reviewed, and nobody was able to review that CL. A number of people tried but they all dropped out, even though, unlike the ABC Conjecture professor, the author was absolutely willing to answer any questions you had.

Eventually the powers that be decided to trash the CL, not because there was any doubt regarding its correctness, but on the principle that you can't let anything that only one person can comprehend enter the codebase.


As someone newly entering the workforce, this is a really depressing view into how bureaucracy can squash innovation.


Do not be depressed, at least because of this. Few people will ever write anything that is correct, cannot be meaningfully simplified, and yet is incomprehensible and essentially unreviewable. The only other thing that comes to mind is top-grade encryption algorithms. (They are reviewed, but even after extensive, extensive review by the very smartest people in the field, there's still an irreducible part when selecting an algorithm where everyone just has to sort of hope there isn't some fatal flaw in there somewhere. Often, years and years later, there is.)


> Do not be depressed, at least because of this. Few people will ever write anything that is correct, cannot be meaningfully simplified, and yet is incomprehensible and essentially unreviewable.

You don't have to write something that is essentially unreviewable to end up in the scenario described earlier. You just have to work with a team that isn't prepared to learn new things and rejects things they don't understand.

I've been studying functional programming lately and can easily imagine being told my code is "incomprehensible" because I wrote it in a functional style instead of using loops and variables.


Wow, that's horrible.


Why? Look at it from an organizational perspective. If the code can only be understood by one person, and that person leaves, you have a piece of code that cannot be maintained or modified without potentially months of effort. Moreover, because it's a particular algorithm, this effort cannot be shortened or distributed across multiple developers. I would argue that whatever efficiency gain Google would get is totally outweighed by the risk of the developer leaving.


Simply because the outcome was much less than the ideal outcome. Yes, accepting the reality that no one else could understand his output (assuming the output was correct), that organizational outcome was the correct one. But it's a tragedy (again, assuming this guy was correct) that his knowledge couldn't be taken advantage of. I just think it would be really tough to face the realization of being so powerful that you are impotent.


Don't get me wrong, but from what I've seen of the world of science, the only people who exhibit this type of behaviour are frauds and delusional pseudoscientists.

All the warning signs of pseudoscience are there: "I couldn't possibly explain it" as a response to lecture invitations, working on something for extended periods of time without sharing results, using lots of obscure terminology which is not standard for the field, one long and obfuscated paper instead of building toward the result incrementally, etc. Using the title "Inter-universal geometer" instead of calling yourself a mathematician is also strange.

The natural way for a mathematician to behave after seemingly solving an important problem is exactly the opposite of what this guy is doing. And the only rational reason to do so is if there is actually something wrong about the work.

I lack the know-how to arrive at my own opinion by reading the paper, but this situation is definitely fishy. And it's not like a bona fide scientist losing it and becoming a pseudo-scientist obsessed with a topic is completely unheard of.


> the only people who exhibit this type of behaviour are frauds and delusional pseudoscientists.

But those usually produce work that is very obviously nonsensical, and want attention rather than avoiding it. Mochizuki has earned the privilege of having his work evaluated on its own merits, and so far nobody has found any obvious flaws.

> And the only rational reason to do so is if there is actually something wrong about the work.

Who says the reason has to be rational? Maybe he has just developed a bad case of stage fright?

> And it's not like a bona fide scientist losing it and becoming a pseudo-scientist obsessed with a topic is completely unheard of.

Can you give some examples?


> Can you give some examples?

Isaac Newton : http://en.wikipedia.org/wiki/Isaac_Newton%27s_occult_studies


Except he's from the pre-modern era and does not fit the definition of what we now call a "scientist". He probably did science (laws of motion, universal gravitation, etc.) alongside alchemy and occult studies.

It's rather a counterexample: you can do occult studies while at the same time providing the world with some of the most revolutionary scientific concepts.


The counterpoint is that the guy has a serious track record before now (according to the article). If he didn't, no one would pay any attention to these claims. Since he does, you have to think that he might have proven it.


Well, the ‘warning signs’ are definitely there, but then it doesn’t appear entirely unlikely that he simply invested so much time building up an entirely new field that it will be impossible to explain it in a few lectures. And given that Perelman apparently still lives with his parents and refused that fancy prize, I don’t know exactly what to expect from a successful mathematician.

So, yes, something is fishy, but so was the weird idea of curved spacetime.


Also, making claims about typical mathematicians and attempting to apply them to those already determined to be atypical is a bit dodgy in and of itself. Perelman was not your typical mathematician, and he behaved atypically as well. I'm not certain, but I don't think he travelled and lectured on his proof of the Poincaré Conjecture. He even turned down a sizable award. Just because an atypical scientist behaves atypically doesn't make their work any less valid.

I agree it seems odd but even in my limited exploration into Pure Mathematics I have seen alternative proofs made with ideas not native to the field. Why must that necessarily make this proof invalid?


I'm far from claiming that the proof is invalid based on my feelings on the subject.

My only issue is that in the article, and in other writing on the subject, nobody is even contemplating the possibility that Mochizuki's work might be unreadable for reasons other than it being too brilliant to grasp.

I wanted to present an alternative possibility which seems to be disregarded at the moment in favor of the attractive "eccentric genius" narrative.

On the topic of Perelman - he did reject the Fields Medal. However, he did give a series of talks at MIT, Princeton and other places a year after publishing his proof.


I must have read the article differently from you. I sensed there was a positive attitude towards him, but I didn't feel they were trying to say he had proven the theorem or was so brilliant no one else could grasp it. On the contrary, it seemed like they were saying "This person has been brilliant for quite some time, but this 'proof' seems entirely nonsensical even to the experts in the field."

So to me, that read more as, this is a curiosity that has a deep and interesting past and an even more interesting present.

> I wanted to present an alternative possibility which seems to be disregarded at the moment

That is entirely fair and valid. I just misread your comment as more along the lines of "Does no one else see how obvious it is that this guy is crazy?!" My fault.

> On the topic of Perelman

Yes I was already corrected. I couldn't remember with certainty whether he did or didn't. I thought he had, but then what I remembered about his personality made me reconsider that.


A lot of professional mathematicians are considering the idea that it might all be nonsense. Popular writing on anything technical is not always representative of what's actually going on behind the scenes.

At the same time, it's important to check the work rather than just dismissing it. Mochizuki's work isn't totally original - a lot of it is derived from existing theory. So it should be possible to validate it despite the complexity.

It's true that serious breakthroughs from individuals working alone are rare, but they are not unheard of. The real test is whether or not this work can be adapted to the rest of mathematics (or the other way around, perhaps).



Thanks. Didn't have the chance to check this morning.


If a programmer locked himself away for 14 years and then emerged and announced he'd written a completely bug free OS, there would be skepticism. Code needs to be battle tested by other people to find the bugs.

Mathematics is the same, to an extent; one guy working alone for 14 years is likely to have missed ideas and perspectives that could illuminate flaws in his reasoning. Maths bugs. If he's produced hundreds of pages of complex reasoning, on his own, however smart he is I'd say there's a high chance he's missed something.

Humans need to collaborate in areas of high complexity. With a single brain, there's too high a chance of bias hiding the problems.

(Repost of my previous comment https://news.ycombinator.com/item?id=4829806)


It's a shame that you've disregarded all of the replies to your original comment. tl;dr there are degrees of bugs and many are easily fixed.

edit: more constructively, imagine that you're working on the NYT crossword and I come along and point out that 45 across is wrong and tell you what the answer should be. Do you then throw away the rest of your work? No, you fix the part that's wrong and then check the rest of the puzzle to figure out the scope of the error.


I don't really accept the notion that inconsistencies in a giant mathematical proof will always show themselves. That does happen sometimes, but if you're breaking new ground (as Mochizuki seems to be) it's much more likely that things will seem consistent to you but are actually inconsistent because you made a mistake somewhere.


Right, that's why other people are trying to verify the proof and aren't just taking it on faith. They're almost certainly going to find errors, the question is whether or not those errors are easily fixed, difficult to fix, or fundamentally impossible to fix.


Quote from the original article:

"...Mochizuki is holding a private seminar with Yamashita, and Kim hopes that Yamashita will then go on to share and explain the work."

The issues are complex. Mochizuki apparently has some diffidence about communicating with the wider mathematical community. Yamashita may have to act as spokesman (and an initial checker). Then once communicated the checking and sifting can begin (and the recycling of new tools start).

Assuming the work is intelligible and valid of course.


Isn't one guy working alone for 6 years how Fermat's Last Theorem got solved?

It did have a flaw that got fixed later in collaboration, but most of the work was one individual's deep focus.


Wiles was much more involved in the math community at large than Mochizuki. He continued to publish non-FLT papers, go to conferences, etc. Even then, the general consensus is that FLT would have been proven much faster if Wiles had shared his work earlier.


Something like this? http://www.templeos.org/


> For centuries, mathematicians have strived towards a single goal: to understand how the universe works, and describe it.

Um, no. Mathematics itself has absolutely nothing to do with "describing the universe". Mathematics is purely abstract. It has no inherent relationship whatsoever with reality.

That certain mathematical constructs can be used to model certain aspects of the real world is basically a lucky coincidence, but not very interesting to mathematicians - that's what physicists do.


I think there's a reasonable argument to be made that mathematical constructs are part of the universe.

David Deutsch has a method for ascertaining whether something can be said to exist or not - ask whether it "kicks back" when you interact with it, in the sense that simulating the response of the thing you're considering in a totally convincing way would involve an effort as large as building a new universe for that thing to exist in.

Rocks "exist" because, if you wanted to build a totally convincing simulation of a rock, you'd need to include all of modern physics as we currently understand it.

Other minds "exist" because simulating them convincingly would basically require you to construct a true artificial intelligence (strong AI).

Mathematics "exists" because giving someone the genuine experience of doing mathematics when they really weren't would involve a simulation of almost unfathomable complexity.

There is a very real truth in the statement that '1 + 1 = 2', or the statement that there are arbitrarily long arithmetic progressions of prime numbers, or any number of other mathematical results. The world of mathematics is a very real part of the universe, that "kicks back" by constantly surprising us when we think about it.

So I don't think the article, which is very well written and researched, deserves the middlebrow "Um, no" scorn that you treated it to.


> I think there's a reasonable argument to be made that mathematical constructs are part of the universe.

Well, one could say that we "create" them using our minds, which certainly are, but that's deep into philosophical territory.

> Mathematics "exists" because giving someone the genuine experience of doing mathematics when they really weren't would involve a simulation of almost unfathomable complexity.

Sounds to me like a misapplication of the method - the complexity arises from simulating the response of a person to math, not from simulating math. By that standard, all fiction is "a very real part of the universe".

> So I don't think the article, which is very well written and researched, deserves the middlebrow "Um, no" scorn that you treated it to.

That concerned one statement, not the entire article (which I found fascinating as well). Yes, I have to admit that this is rather smartassy, but I actually feel that, quite independent of this article, it is an important and amazing realization that few people make.


"By that standard, all fiction is 'a very real part of the universe'."

Strangely, I believe that statement supports crntaylor's argument - that being the point of fiction, as far as I can see.


> I think there's a reasonable argument to be made that mathematical constructs are part of the universe.

What if there are 0 universes? Does 0 exist without any universes?


"I think there's a reasonable argument to be made that mathematical constructs are part of the universe."

oh, sure, but everything is part of the universe. human thought, for instance, is part of the universe. and i'd say that's what mathematics is about.


You are wrong. It is reality that drives mathematicians to work, not pure abstraction. It is because of reality that maths is interesting. It is because of Geo-metry that algebra is so relevant.

Not to speak of Calculus...


> It is reality that drives mathematicians to work, not pure abstraction.

In general, no.

> It is because of reality that maths is interesting. It is because of Geo-metry that algebra is so relevant.

Perhaps to you, but not to most mathematicians, certainly not those working in academic settings. Have you ever looked at group theory or topology?

I don't think you've ever done the kind of math that mathematicians do. What you learn in school is not math, it's calculating. What mathematicians do is to invent constructs that have no basis in reality and prove statements about them, then come up with more constructs based on those statements, ad infinitum.

Sometimes those constructs may be designed to model real world problems, and getting funding is probably easier in those areas, but just as often the applicability is only discovered afterwards - or not at all.

The best example (because it's something we've all actually learned about) is complex numbers. They were first invented in the 16th century and considered pointless and irrelevant at first. People soon discovered that they could be useful in proofs about non-complex numbers as well, but it took several centuries before they were found to be directly applicable in electrical engineering (many more applications have been discovered since).


> invent constructs that have no basis in reality

Regardless of how abstract you get, how far you go, math is always tied to our physical reality. The basic operations are reflections of properties of our universe. The "kind of stuff" mathematicians do allows us to model and reason about our world in ways that wouldn't be possible any other way (that we know of).


>math is always tied to our physical reality

I'm sorry, but even if this is objectively valid, it's not a claim that you can support. There are entire sub-fields of maths in which there are no known physical attachments.

Now, that's not to say that some time in the future we won't discover a relationship between every mathematical concept and some physical system. But at this moment, the claim you are making has a whole set of counterexamples, and a large one at that. For one example, take the Banach–Tarski paradox:

>Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of non-overlapping pieces, which can then be put back together in a different way to yield two identical copies of the original ball.[1]

This is, as far as we know, physically nonsense.

That being said, there is quite a lively debate among mathematicians as to whether or not math is tied to reality.

[1] http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox


> The basic operations are reflections of properties of our universe.

No, this is most definitely not true for all branches of mathematics.


He's not wrong.

To many mathematicians, reality is largely irrelevant. For example, G. H. Hardy, an extremely prominent British mathematician, believed that true mathematics is an art form and is not useful. He dismissed applications of mathematics as dull and boring.

And I can relate to him. It's incredible how complex, beautiful structures arise from a couple of simple axioms. It doesn't matter if what you study will be relevant or not, what matters is that it's fun and stimulating to explore.


A (somewhat tongue-in-cheek) question to ponder: If Hardy were alive today, would he turn his back on Number Theory as too useful, and study something more theoretical like the existence of long-time solutions to Navier-Stokes?


As someone with a degree in applied math, the pure abstract is more interesting than the applied. Applied math is like building really amazing and intricate sand castles on the beach. Pure math is like building the same sand castle, but in the sky and it's kept aloft purely by how beautiful it is, freed from constraints like "touches the ground" and "can support itself under gravity".

A lot of my friends feel the same way, with some of them specifically avoiding having "real world" applications of their work, as if that makes it an even better sand castle.

As to why I have an applied degree instead of doing pure math, numerical analysis makes a weird intuitive sense to me, and I figured building decent sand castles on the beach was better than making terrible sand castles in the sky that could barely hold themselves up. It also gets the grant money.


Look, I am an Algebraic Geometer and have done Schemes and whatnot. It is REALITY, and this has nothing to do with "applied" or "pure". The fact that it is abstract has nothing to do with its being unreal.

Just to clarify: I am an expert too.


Pure math evangelist here. To be clear, it is only reality insofar as the axioms from which the theorems are derived are reality, and only insofar as our understanding and application of logic is an objectively valid construct of reality.

When it all works out as beautifully as it does, in say Euler's Identity, it's hard to remember the possibility that the axioms could turn out false, or logic as we know it flawed.

But assuming (heh) that the axioms are true and that our understanding of logic is valid, "pure" math is as much a part of reality as "applied" math. And I'll choose proving the Fundamental theorem of Galois theory over number crunching in Matlab as my exercise in experiencing reality every time.


This is a philosophical position that almost no one engaged in serious math holds.

This book has an extremely modern discussion of this:

http://www.phil.cam.ac.uk/teaching_staff/potter/staip.html

Only laypeople and armchair philosophers passing through math land have this unfortunate view.


The statement's a little hyperbolic, but I think you're misunderstanding the author's point. The point is that math is aimed at trying to build understanding in contrast to producing logically airtight proofs. The proofs are important to make sure that our understanding is right, but there's more to it than that.

I agree that's not the literal sentence you quoted, but it's definitely the theme of the article: according to many of the people cited, Mochizuki's shirking his responsibility by not explaining how to understand the result, regardless of whether the proof is correct or incorrect.

Caveat: I Am Not A Mathematician (IANAM)


> The point is that math is aimed at trying to build understanding in contrast to producing logically airtight proofs. The proofs are important to make sure that our understanding is right, but there's more to it than that.

Understanding is necessary to be certain the proof is correct. After that is ensured, many mathematicians will happily use the proof and base their work on it without understanding it.

> Mochizuki's shirking his responsibility by not explaining how to understand the result, regardless of whether the proof is correct or incorrect.

The main problem caused by his "shirking" is that it leaves the correctness of the proof uncertain and trying to fully understand it is a lot of work that could turn out to be wasted. People want to confirm the proof, and Mochizuki could make it much easier and less costly for them, but for some reason declines to do so.


From TFA: "Even an incorrect proof is better than no proof, because if the ideas are novel, they may still be useful for other problems, or inspire another mathematician to figure out the right answer."

edit: I can see taking issue with the word, "universe" in the original quote. I have trouble seeing how the word "understand" is debatable.


Here's an amusing counterargument:

http://pauli.uni-muenster.de/~munsteg/arnold.html


That's not an argument, that's an almost incoherent rant.


I actually read the whole article, word for word. Maybe the personality of Shinichi Mochizuki appeals to me, or maybe I find maths more interesting than I admit. I don't know which.

Also, I find it quite surprising that proofs for problems in a domain as elementary as number theory should have to be so complex; it sort of baffles me. I hope I can rise to the level where I begin to understand this lingo, or that someone brings it down to a level where I can find it interesting to read, like this article :D


> Also, I find it quite surprising that proofs for problems in a domain as elementary as number theory should have to be so complex; it sort of baffles me.

One reason for this is that primes are defined by their multiplicative properties. There aren't many easy connections between primes and the additive properties of numbers, and so proofs that try to relate additive things to primes tend to involve deep and complicated things.


In case anyone wants to give it a try, here is the paper: http://www.kurims.kyoto-u.ac.jp/~motizuki/Invitation%20to%20... and it looks like he is about to give a lecture in Tokyo: http://www.kurims.kyoto-u.ac.jp/~motizuki/news-english.html

And here is a FAQ on the theory: http://www.kurims.kyoto-u.ac.jp/~motizuki/FAQ%20on%20Inter-U...


Wow! That was an amazing read! These old articles might be of some interest as well:

http://today.uconn.edu/blog/2012/10/the-mochizuki-theorem-wh...

http://mathbabe.org/2012/11/14/the-abc-conjecture-has-not-be...


It's a good tradition in math that you, as the author, have to convince others that your theorem is correct. If you cannot convince your colleagues, then you can't claim you have a proof.

That's why a lot of mathematicians are sceptical of computationally constructed proofs, such as state-space exploration. Or, at least, they don't like the taste of it :)


As a completely uneducated simpleton, it seems bizarre to me that addition and multiplication are considered "different" in the deeper explorations of math and number theory.

It seems like multiplication is just an extension of addition. How many times do you want to add numbers together? The result is multiplication. Similarly, addition can be used to represent multiplication. You want to multiply, which can be represented as adding things a certain number of times.

Of course, the conjecture introduces rules about prime numbers, and then says "ooo" now we see that prime numbers being added together (with rules of what kinds of prime numbers are allowed in the equation) results in "predictable" rates of incidence.

I guess it's a little late to go back and be born again with a life of education centered around number theory so I could see and comprehend the complexity!


This really deserves a longer and better answer, but I'm struggling to explain. It's a pretty deep conceptual thing, but I'll try.

    As a completely uneducated simpleton, it seems bizarre to me
    that addition and multiplication are considered "different"
    in the deeper explorations of math and number theory.

    It seems like multiplication is just an extension of addition.
You're not alone in this, but as you go on in advanced math you find more and more that multiplication is not really repeated addition; it just happens to coincide with repeated addition when that makes sense. The problem/opportunity is that multiplication still makes sense when repeated addition doesn't.

It might be easier to think of this with regard to powers. People teach that A^5 is just AxAxAxAxA. You then deduce that A^a x A^b = A^(a+b). From that you start to assign meanings to things like A^0 and A^(-1). But what does it mean to multiply together -1 copies of a number? That doesn't make sense!

And what about A^pi? How can you have a transcendental number of things multiplied together? It doesn't make sense!

As you get deeper into math you need different definitions of powers, and of multiplication, and you find that they coincide with repeated multiplication and repeated addition. They may have originated with those ideas, but that's not really the best way to think about them, and it's not, in some sense, what they "are".
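To make the shift concrete, here is a minimal Python sketch of the more general definition analysis actually uses (exp/log; the function name is just for illustration):

    import math

    # For positive A, define A^x = exp(x * ln A). This agrees with
    # repeated multiplication when x is a whole number, but also makes
    # sense for x = 0, x = -1, or x = pi.
    def power(a, x):
        return math.exp(x * math.log(a))

    print(power(2, 5))        # ~32, matches 2x2x2x2x2
    print(power(2, -1))       # 0.5, no "repeated multiplication" reading
    print(power(2, math.pi))  # ~8.825, a transcendental exponent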

A poor analogy might be this. To an outsider, Smalltalk and Haskell will kind of look the same. They're programming languages, they do the same things. But they are really very different animals. So multiplication is really a very different animal from repeated addition.


> It seems like multiplication is just an extension of addition. How many times do you want to add numbers together? The result is multiplication. Similarly, addition can be used to represent multiplication. You want to multiply, which can be represented as adding things a certain number of times.

If you're interested in this train of thought here's a resource you might want to chase up:

http://en.wikipedia.org/wiki/Hyperoperation

As for the "bizarre" interplay between addition and multiplication, you might want to contemplate The Other FLT [1]. TOFLT can be understood with just high-school math but is a big gateway drug into big-ass number theory. In more ways than one!

[1] http://en.wikipedia.org/wiki/Fermats_little_theorem
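If you want to see TOFLT in action before chasing the theory, here is a quick Python check (the Carmichael-number aside at the end is my addition, not part of the comment above):

    # Fermat's little theorem: for prime p and any a with 1 <= a < p,
    # a^(p-1) is congruent to 1 (mod p). pow(a, e, m) is Python's
    # built-in modular exponentiation.
    for p in (5, 7, 13, 101):
        assert all(pow(a, p - 1, p) == 1 for a in range(1, p))

    # The converse fails: 561 = 3 * 11 * 17 is composite, yet it still
    # passes the base-2 test (561 is a Carmichael number).
    print(pow(2, 560, 561))   # prints 1 anyway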


The key idea behind math is to generalize from very specific rules, like the concept of multiplication as n copies of addition, to very general rules based on behavior. For example, in Ring Theory, multiplication and addition are just functions that satisfy specific properties. By focusing on the class of all functions that satisfy these properties, we're able to prove very general theorems that have much wider applications than if we just focused on addition-based multiplication.

You see the same pattern in programming. Imagine if you had to have separate addition operators for doubles, floats, ints, and bigints. It's far more useful to have one function that only depends on the behavior of its input, rather than having its input be a very specific type of object.
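A minimal sketch of that point in Python - one function that depends only on the + behavior of its inputs, not on any particular numeric type (the function name is arbitrary):

    from fractions import Fraction

    def total(xs):
        # uses nothing but the + behavior of the elements
        result = xs[0]
        for x in xs[1:]:
            result = result + x
        return result

    print(total([1, 2, 3]))                         # ints: 6
    print(total([1.5, 2.5]))                        # floats: 4.0
    print(total([Fraction(1, 3), Fraction(2, 3)]))  # exact rationals: 1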


> It seems like multiplication is just an extension of addition. [...]

> Similarly, addition can be used to represent multiplication.

Yet, addition cannot be represented as multiplication (AFAICT), so there you have a difference.


This situation reminds me of one of my favorite stories about Plato [1]. He gave a lecture in Athens on The Good, and lots of people showed up, but the whole talk was about mathematics! I hope some folks somewhere will take the time to understand Mochizuki's world. The part about Yamashita studying with him privately is encouraging.

[1] described in this paper: http://www.jstor.org/discover/10.2307/4182081?uid=3739856...


I think this points to the necessity of developing better proof assistant systems [1], in particular for automated proof checking [2]. However, I have never interacted with such systems and thus don't know whether it would be possible to just feed Mochizuki's formidable constructions into one.

[1] http://en.wikipedia.org/wiki/Proof_assistant

[2] http://en.wikipedia.org/wiki/Automated_proof_checking


I would not expect so. Mechanized proof systems tend to require a lot more detail than one would put into a proof meant for humans to read. There's been a lot of work in automating part of the generation of a proof, but that still requires a human to look at what the automation came up with and intervene to guide it in the right direction.


Don't forget that you would need to put not just Mochizuki's work into a proof checker, but also everything that it depends on. From what I understand, this is far too much work to be feasible with currently available proof checkers and changing that would require more than incremental improvements.


Mochizuki has a one-page PDF posted on his website that tries to explain Inter-universal Teichmuller Theory through an analogy to a Japanese animation: http://www.kurims.kyoto-u.ac.jp/~motizuki/sokkuri-hausu-link...

I wish I could find the animation itself, but that link is broken.


Song from the album: http://youtu.be/feOLbipVGEU

If you find a copy of his explanation in Japanese I'll try to see if he's referring to a literal animation or not. I couldn't find one and he doesn't really refer to anything beyond the theme/concept and the girl's theta-like eyes.

Edit: Something else I found (https://www.jstage.jst.go.jp/article/essfr/6/3/6_160/_pdf).


Is that article written by Mochizuki? Can you make sense of it?


It is not written by Mochizuki (望月新一), but rather by Shiraki Yoshinao (白木善尚). You can see his profile here: http://www.sci.toho-u.ac.jp/is/lab/shiraki_lab/shiraki.html

I didn't really get much from it even with some assistance, but the very first part is a quick dialogue about a universe king and a neighbor-universe king exchanging New Year's gifts. Section 7 is about the そっくりハウス (the "look-alike house") and goes with the page you linked, but ends by saying something about how the multiplicative and additive rotational properties have something to do with the different universes seeing each other. He compares the girl's excitement to Mochizuki's upon discovering the identical house inside of a house.

Someone with better command could give a much better summary, but I'd personally be more interested in the lecture Mochizuki himself gave on IUTeich last week (according to his site).


(ok, rant ahead)

I think mathematicians have a weird way of thinking about problems.

First-order logic for example: http://en.wikipedia.org/wiki/First-order_logic

It's quirky to think, for example, on the natural numbers that 'exists an operation + and a null element under that operation 0'

This is "very understandable" by humans, but very difficult to compute.

This conjecture, for instance, is stated in a way that looks qualitative (though it has a precise definition), and it usually seems that proofs are even harder for theorems defined like that:

http://en.wikipedia.org/wiki/Abc_conjecture


If you want general results, you need to abstract away from standard preschool algebra and build up models that then let you get said general results. That might appear ‘weird’, but I don’t see how else you could get even to calculus, not to mention, for example, the algebra driving a sensible description of quantum mechanics or differential geometry.

You called it a rant, but could you maybe still make a suggestion on how to better think about problems?


" you need to abstract away from standard preschool algebra and build up models that then let you get said general results"

Yes, of course.

" but could you maybe still make a suggestion on how to better think about problems?"

And that's what I meant. Thinking about problems in a different way (but still provable, and still working in a similar way)

For example, for Peano arithmetic you have that equality is symmetric (and transitive)

Now, there are several ways to explain that, and it's usually explained more or less by "for all X and all Y, if X = Y then Y = X"

Now, it would maybe be interesting to have a 'different explanation' that is as powerful as first-order logic but works differently (and is maybe easier to compute).

For example, it may be possible to write Peano arithmetic as a grammar (so zero would be ' ', one would be I, two would be II, etc)
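A toy version of that idea in Python, purely illustrative - unary numerals as strings, with zero as the empty string, so that arithmetic becomes string rewriting:

    # zero = "", one = "I", two = "II", ...
    def add(m, n):
        return m + n          # concatenation: "II" + "III" -> "IIIII"

    def mul(m, n):
        return m * len(n)     # repeat m once per stroke of n

    two, three = "II", "III"
    print(add(two, three))    # IIIII  (2 + 3 = 5)
    print(mul(two, three))    # IIIIII (2 x 3 = 6)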


"For example, it may be possible to write Peano arithmetic as a grammar (so zero would be ' ', one would be I, two would be II, etc)"

I seem to recall doing this as a homework assignment in computer science, writing some unrestricted grammars that could be used to perform certain simple operations on some simply-encoded numbers.

See also http://en.wikipedia.org/wiki/Thue_%28programming_language%29

But I wonder if you've gotten very far into mathematics. It's actually well known that there are many things equivalent to first-order logic, and it's completely common to choose different ones based on the sort of thing you're doing, just as many things are equivalent to Turing machines and one can freely choose the most convenient one for the local problem. Peano arithmetic is chosen merely for its convenience for the simplest of proofs, and the instant it becomes inconvenient a real mathematician abandons it for something more locally useful.


And after you spent ten years reformulating basic maths in your fancy new logic, people will look at your papers and won’t understand a word, which appears to be more or less what happened to our poor protagonist in the OP.

Furthermore, I have to admit I don’t see the immediate advantage such a reconstruction would bring with it.


Well, the OP's reinvention looks like something more high-level.

Well, there may not be immediate advantages, but in math you never know. There are several hard problems in one domain that are trivial in another domain, for example.

An example from physics: http://en.wikipedia.org/wiki/Hamiltonian_mechanics


> There are several hard problems in one domain that are trivial in another domain, for example.

Certainly, and this is pretty much what the OP did: invent a new domain to solve a problem. Hamilton merely reformulated the problem slightly, and while I personally love Hamiltonian mechanics, I don’t think it is comparable to ‘inter-universal geometry’ or replacing first-order logic with something else.

So, yes, a different field may provide a different perspective and hence an easier solution, but if you want to replace first-order logic, you’re not looking at a different/new field in maths, you’re looking at rebuilding maths.


Your suggestion reminds me of typographical number theory (TNT) in Hofstadter's Gödel, Escher, Bach. TNT is introduced for didactic purposes; I don't see the point of using it for actual mathematics.


That's what I thought when I read his comment. Hofstadter emphasizes how unmanageable such a system becomes quite quickly.

For example, here's what a proof that addition is commutative looks like in TNT: http://imgur.com/a/O63Ij


Yes, I was thinking of it when I wrote the comment. There's also an arithmetic with P and M symbols - for plus and minus (but I can't find it on Google, it's been a while, sorry).

Thanks for sharing this snippet, one of the several fun things in GEB.


No problem, the image at the beginning of chapter 10 is one of my favorite GEB gems: http://i.imgur.com/NahfzTp.jpg


First-order logic is a fascinatingly deep and interesting topic. The level of abstraction is a requirement - it makes it far easier to work with, prove, and understand.

Source: 4th year mathematics student.


The first section of this article reminds me of the film Proof.

http://www.imdb.com/title/tt0377107/


“The point is not to prove the theorem,” explains Ellenberg. “The point is to understand how the universe works and what the hell is going on.”

School failed at conveying this to me in so many domains.


The point I often finish with when I give talks is this:

    People tell you that the point of being good at science
    and math is to help you understand the universe, and to
    understand what's going on.

    This is only the first step.

    The real reason for being good at math and science is to
    BEND THE WORLD TO YOUR WILL !!
The kids seem to like that ...


  "For centuries, mathematicians have strived towards a
  single goal: to understand how the universe works, and
  describe it."
I cannot resist engaging that loaded premise. It is a thought that I've wanted, for some time, to see someone skilfully dissect and lay bare for easy comprehension.

The closest I've come to seeing it is the following - a concise and accessible yet well-rounded explanation of the relevance (or lack thereof) of mathematics to the fabric of our reality.

Alex Knapp, a science writer at Forbes:

  In the midst of a rather interesting discussion of the
  notion of Aristotle’s Unmoved Mover, Leah Libresco went on
  a mild digression about the philosophy of mathematics that
  I couldn’t let go of, and feel compelled to respond to.
  
  She says:
   
    I take what is apparently a very Platonist position on  
    math.  I don’t treat it as the relationships that humans
    make between concepts we abstract from day to day life.  
    I don’t think I get the concept of ‘two-ness’ from
    seeing two apples, and then two people, and then two
    houses and abstracting away from the objects to see what
    they have in common.

    I think of mathematical truths existing prior to human 
    cognition and abstraction.  The relationship goes the
    other way.  The apples and the people and the houses are
    all similar insofar as they share in the form of two-
    ness, which exists independently of material things to
    exist in pairs or human minds to think about them.

  The notion that there’s something special about math –
  that it has some sort of metaphysical significance – only
  makes sense if you ignore the history of how we uncovered
  math to begin with. It was, despite Leah’s protestations,
  exactly just the abstraction of pairs and triplets and  
  quartets, etc. The earliest known mathematics appear to be
  attempts to quantify time and make calendars, with other
  early efforts directed towards accounting, astronomy, and
  engineering.

  Mathematics is nothing more and nothing less than a tool that’s
  useful for humans in solving particular problems. Math can
  be used to describe reality or construct useful fictions.
  For example, we know now that the spacetime we live in is
  non-Euclidean. But that doesn’t make Euclidean geometry
  useless for everyday life. Quite the contrary – it’s used
  every day. You can use mathematics to build models of
  reality that may not actually have any bearing on what’s
  real. For example, the complicated math used to describe
  how the planets moved in the Ptolemaic model of the solar
  system – where everything orbited in circles around the
  Earth – actually produced very accurate predictions. But
  it was also wrong. There aren’t actually trillions of
  physical dollars circulating in the economy – there are
  just symbols for them floating around.

  The bottom line is that human beings have brains capable
  of counting to high numbers and manipulating them, so we
  use mathematics as a useful tool to describe the world
  around us. But numbers and math themselves are no more
  real than the color blue – ‘blue’ is just what we tag a
  certain wavelength of light because of the way we perceive
  that wavelength. An alien intelligence that is blind has
  no use for the color blue. It might learn about light and
  the wavelengths of light and translate those concepts
  completely differently than we do.

  In the same way, since the only truly good mathematicians
  among the animals are ourselves, we assume that if we
  encounter other systems of intelligence that they’ll have
  the same concepts of math as we do. But there’s no
  evidence to base that assumption on. For all we know,
  there are much easier ways to describe physics than
  through complicated systems of equations, but our minds
  may not be capable of symbolically interpreting the world
  in a way that allows us to use those tools, any more than
  we’re capable of using a tool that requires the use of a
  prehensile tail.

  Math is a useful descriptor of both real and fictional
  concepts. It’s very fun to play around with and it’s
  essential for understanding a lot of subjects. But it’s
  just a tool. Not a set of mystical entities.
This explanation is very satisfying yet disillusioning at the same time.

In short, his explanation allows for some (or a very large number of) mathematical truths (according to the consensus of mathematicians and what appeals to their logic) to be just that -- figments of numerical imagination that neatly sit in the confines of our logic.

Nothing necessitates that all mathematical truths (much less conjectures) correspond to some aspect (however minute or however large) of our reality.

Some truths might (purely out of happenstance), but nothing mandates that all math truths correspond to some facet of our physical reality.

So, some (or a lot of) math is just hocus pocus.

Non-mathematicians will never know owing to the very nature of peer-review and the consensus-building aspect of modern research and scholarship.

A few questions:

What other conclusions can be drawn if one were to find this explanation appealing?

Are there other explanations of the relationship between math and our reality, that you've found appealing?

Is there a consensus among mathematicians as to what higher-order math essentially is in pursuit of, or should be in pursuit of?

Is it an exercise in randomly throwing darts, hoping that some "mathematical truth" sticks and corresponds to some observable phenomenon?

Source:

Does Math Really Exist?

http://www.forbes.com/sites/alexknapp/2012/01/21/does-math-r...

Edit: Clean-up and rewrite.


It's cool that "Alex Knapp, a science writer at Forbes" has solved one of the major issues in the philosophy of mathematics by, erm, just asserting that Platonism is false. Of course, maths is a tool, but that doesn't mean that it is just a tool, and Knapp doesn't provide any arguments for that conclusion. I can use a rock as a tool, but that doesn't mean the rock has no existence independent of my mind. In fact, the only reason the rock works as a tool is because it has an independent existence (I can hit things with it); the same might be true of maths.

Now, there are arguments for positions like Knapp's, but they're more complicated and contested than Knapp's facile post suggests. See, for instance, this Stanford Encyclopedia of Philosophy article: http://plato.stanford.edu/entries/fictionalism-mathematics/


Seriously. At least Leah Libresco had the intellectual clarity to declare herself a Platonist, as in, literally, a believer in or adherent to some part of Plato's philosophy.

Knapp latches onto a lack of empirical evidence for the Platonist viewpoint (though he narrowly defines evidence as apparently needing to be extra-human to be valid, which isn't a well-founded definition; but then again, definitions are the fundamental problem here), and so he concludes that it must be false, completely missing his own point.

Sorry. While there is a wealth of interesting discussion to be had on philosophy of mathematics, and even more discussion (and certainly more concrete discussion) to be had on the difference between how mathematics is actually created and how it is then defined in proofs and literature, this passage misses the boat.

Also, spitx: while fair use covers quoting from and discussing articles quite well, quoting the full article, even with discussion, could be problematic and certainly should be considered poor form. Take some quotes and then link to the rest.


I find what's written on that page much more naive than Knapp's view. Knapp is not advocating fictionalism, i.e. that math has no meaning. He is advocating the opposite: math has meaning, but the reason is not some metaphysical mystery, it has meaning because it was constructed for that purpose. Knapp's view is much closer to what that page describes as physicalism, with the caveat that humans are not perfect so our math is not necessarily the same as alien math.

That is also one of the instances where that page is naive. It dismisses physicalism in just a couple of words, on the grounds that physicalism cannot explain infinities, especially very large infinities. But mathematicians themselves are divided on whether those are "real" or just an artifact of the axioms (see constructivism and, more radically, finitism). Another view that fits perfectly within physicalism is that infinities are just an approximation, a reasoning tool to simplify things.


> It's cool that "Alex Knapp, a science writer at Forbes" has solved one of the major issues in the philosophy of mathematics by, erm, just asserting that Platonism is false.

And what is wrong with saying that? The burden of proof is on Platonists, just like it is on theists. I don't see any reason not to think that Platonism has been a huge waste of time.


I feel like mathematics applies too closely to physics for it to be just a tossing of darts. With a bit of familiarity complex numbers are the "obviously best" numbers, the simplest (sort of) and most mathematically interesting; that they are so central to quantum mechanics is too much for coincidence. Then there's the fact that mathematical truths really are true in models to which their axioms apply (as far as we can measure), unlike any other rule of inference, so the association between the symbols we manipulate is not arbitrary.

Many mathematical areas initially seem to be blind alleys - Hardy rejoiced in the idea that his work on primes would never be a weapon of war. But there's an uncanny tendency for connections to emerge, and theorems in one field imply something new in a different one. Again, it seems too much for mere coincidence.

Moving from what's widely supported to my own views, my suspicion is that reality itself is a mathematical phenomenon (at one time I was expecting to work on the Causal Sets hypothesis, which attempts to explain the universe as arising from certain partially ordered sets).


> Hardy rejoiced in the idea that his work on primes would never be a weapon of war

What about RSA encryption, and the former US export controls thereon?

Also, what about more exotic mathematical objects, like finite fields for example, that have a lot of structural properties but don't seem to model much of anything in the real world, but are still useful theoretically?


>What about RSA encryption, and the former US export controls thereon?

That's exactly the point.

>Also, what about more exotic mathematical objects, like finite fields for example, that have a lot of structural properties but don't seem to model much of anything in the real world, but are still useful theoretically?

If that theoretical use applies to physics then to my mind the theoretical objects are as real as anything else. I have a friend who likes to say that electrons don't "really" exist (because they can't be directly observed), they're just a theoretical object that makes our calculations easier. I guess if you pressed me on philosophy I'd say that the only "real" things are sensory observations - we posit objects such as "that table" because they make the explanation of our visual perceptions easier. If positing a finite field makes some part of physics simpler, that seems to be the same thing.

Now for areas of mathematics that are isolated from the rest it's much easier to say they don't represent anything real. But one of the things I was trying to say in the grandparent post is that there's an uncanny tendency for these areas to not stay isolated for long.


The book that you want to read is The Mathematical Experience. Trust me on this. There is no book I know of that is as good at conveying the feeling and breadth of doing mathematics, with real examples, at a level that is both largely accessible to high school students and enlightening for math professors.


Thanks for the suggestion. Sounds like an interesting read.

Here's a review of the "Study Edition" of this book:

   "How much mathematics can there be? they are asked. 
   Instructors in a Mathematical Experience course must be
   prepared to respond to questions from students concerning
   the fundamental nature of the whole mathematical
   enterprise. Stimulated by their reading of the text,
   students will ask about the underlying logical and
   philosophical issues, the role of mathematical methods
   and their origins, the substance of contemporary
   mathematical advances, the meaning of rigor and proof in
   mathematics, the role of computational mathematics, and
   issues of teaching and learning. How real is the conflict
   between “pure” mathematics, as represented by G. H.
   Hardy’s statements, and “applied” mathematics? they may
   ask. Are there other kinds of mathematics, neither pure
   nor applied? This edition of the book provides a source
   of problems, collateral readings, references, essay and
   project assignments, and discussion guides for the
   course."

   "The authors state, “Most writers on the subject seem to
    agree that the typical working mathematician is a Platonist
    on weekdays and a formalist on Sundays.” The substance 
    of the mathematics appears to change with experience and
    depends on the person recounting the story. But it has an 
    objective reality that is independent of the person. Alas, 
    when precision is required, it is common to retreat to the
    formalist position that mathematics is only a created 
    structure of axioms, definitions, and their consequences."
Source:

http://www.ams.org/notices/199710/comm-millett.pdf (PDF)

http://www.springer.com/birkhauser/mathematics/book/978-0-81...


There was a great discussion of this in Anathem by Neal Stephenson. It's a sci-fi book, with a story and so on, but I mostly liked it for the discussions between characters, presented with invented terminology (so you don't skip over them out of familiarity).


To be honest, I read "Inter-universal Geometer", and I immediately wondered if it was an Anathem reference.


> What other conclusions can be drawn if one were to find this explanation appealing?

That even if we find a Grand Unified Theory of Everything that fully explains the physical world, we'll never run out of things to discover.

> Are there other explanations of the relationship between math and our reality, that you've found appealing?

None that makes sense, given what I know.

> Is there a consensus among mathematicians as to what higher-order math, essentially is in pursuit of or should be in pursuit of?

The potential for more math. Mathematicians look for theorems and constructs that are "interesting", which basically means that they are neither simple nor random. Prime numbers would be uninteresting if they were regular, but also if they were random. In fact they are neither: there's a multitude of patterns - the Wikipedia category "Classes of prime numbers" has 69 pages in it!
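(A toy sketch in Python of that "neither regular nor random" quality, my own illustration: the gaps between primes fluctuate unpredictably, yet every prime above 3 sits at 6k +/- 1.)

    # Primes: irregular gaps, but rigid congruence patterns.
    def primes_up_to(n):
        sieve = [True] * (n + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    ps = primes_up_to(1000)
    gaps = sorted({b - a for a, b in zip(ps, ps[1:])})
    print(gaps)                                       # [1, 2, 4, 6, 8, ...]
    assert all(p % 6 in (1, 5) for p in ps if p > 3)  # always 6k +/- 1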

> Is it an exercise of random shooting of darts hoping that some "mathematical truth" sticks and corresponds to some observable phenomenon?

Yes, but some people don't care where the darts go, while others care about observable phenomena and notice any darts lying near them and start asking around who shot those and what they were doing...


>That even if we find a Grand Unified Theory of Everything that fully explains the physical world, we'll never run out of things to discover.

That's odd. In my experience, these detours in these discussions come as a result of how vague this whole terrain is. There is not even a smidgen of agreement on even the most basic things.

Can't mathematicians and theorists agree on what areas of math are most helpful and what areas least helpful in arriving at a Unified Theory?

This leads up to the other question of consensus among mathematicians.

>The potential for more math. Mathematicians look for theorems and constructs that are "interesting", which basically means that they are neither simple nor random.

Isn't there a faction of math people who strive toward a defined, non-abstract direction, as opposed to fostering a laissez-faire approach to mathematics scholarship that naturally creates more math, so that their area of expertise gets recognition, not to mention substantial purses of money?

Is math so poorly funded that all mathematicians lead a hand-to-mouth existence and therefore, collectively, as some sort of cabal, have to resort to these self-preservation tactics?

Come on. Really?


> Can't mathematicians and theorists agree on what areas of math are most helpful and what areas least helpful in arriving at a Unified Theory?

Not fully, and with good reason (see below).

But actually, my point was that the existence of mathematical constructs that do not correspond to any physical reality means that there will be math to do when (and if) all physics has been done.

> Isn't there a faction of math people who strive towards a defined, non-abstract direction as opposed to fostering a laissez-faire approach to mathematics scholarship that naturally creates more math so that their area of expertise gets recognition and not to mention substantial purses of money?

Yes and no. There is the branch of applied math, and I'm sure they get funding more easily.

But there are also mathematicians (cited several times in the comments here) who see math as art and want to do it for its own sake.

And it has happened quite often that these "pure" mathematicians came up with entirely new stuff that only afterwards (and without anyone foreseeing it) turned out to be useful in modelling physical processes. Even among mathematicians you sometimes find that you can prove something in one field by using constructs and theorems from an entirely different field that nobody thought was in any way related. I believe Wiles' proof of Fermat's Last Theorem was like that.


"It is difficult to avoid the impression that a miracle confronts us here, quite comparable in its striking nature to the miracle that the human mind can string a thousand arguments together without getting itself into contradictions, or to the two miracles of laws of nature and of the human mind's capacity to divine them." -Eugene Wigner

http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html


Have you read Bertrand Russell's 'Introduction to Mathematical Philosophy'? I tried to read it a while back. It discusses things in a similar tone.


To me, the most interesting observation about this came from computability theory, of all places.

To some extent, in CS we study completely arbitrary constructs--Turing machines. Turing machines, indeed, are just tools: they're a simple model of a computing machine that makes sense just based on existing technology. Or perhaps we study the lambda calculus which, while less arbitrary than a Turing machine, is still obviously a tool created by a mathematician to perform a particular task. Or maybe we study existing programming languages which are even more arbitrary and contrived than Turing machines.

All these models? Obviously just tools created by people, often for very specific tasks.

But then a surprising fact emerges: no matter which tool we use, we can accomplish exactly the same things! The set of programs we can run is the same--these arbitrary models are all Turing-complete. Somehow, the notion of Turing completeness is extremely robust against all the different models of computation we've come up with.

In fact, it turns out that any "reasonable" model is going to have the same power. And in this case, "reasonable" actually is reasonable--the main restriction is that the model can only do a finite amount of work at each step.

So while any given model is arbitrary, the class of problems these models can solve isn't. Rather, this class naturally emerges from the models. It's a natural property of computation rather than merely a human contrivance.
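(A quick way to feel that robustness, sketched in Python: Church's encoding builds arithmetic out of nothing but function application, and the "arbitrary" encoding computes exactly what the hardware does.)

    # Church numerals: arithmetic from function application alone.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mul  = lambda m: lambda n: lambda f: m(n(f))

    def to_int(n):                     # decode by counting applications of f
        return n(lambda k: k + 1)(0)

    two, three = succ(succ(zero)), succ(succ(succ(zero)))
    assert to_int(add(two)(three)) == 5    # same answer as machine arithmetic
    assert to_int(mul(two)(three)) == 6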

A similar thing happens when you look at programming and logic. We all know about Curry-Howard: it turns out that writing a program and writing a proof are analogous actions. You can map a proof in a logic system to a program in a programming language, and vice versa. While both logic systems and programming languages are somewhat arbitrary themselves, the fact that they're somehow equivalent hints at some underlying principle.
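(In miniature, with Python type hints standing in loosely for a logic: a total function of a given type is a proof of the corresponding proposition. Python doesn't enforce totality, so this is only suggestive.)

    # Curry-Howard in miniature: types as propositions, programs as proofs.
    from typing import Callable, Tuple, TypeVar

    A, B, C = TypeVar("A"), TypeVar("B"), TypeVar("C")

    # Proposition: (A and B) implies (B and A).  The body is the proof.
    def swap(p: Tuple[A, B]) -> Tuple[B, A]:
        return p[1], p[0]

    # Proposition: (A -> B) and (B -> C) implies (A -> C).
    def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
        return lambda a: g(f(a))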

And that is why I'm fascinated by category theory, which seems to embody this underlying principle. It unifies proofs and programs with categories in what is very much like an extension of Curry-Howard. But then it goes further, and uses exactly the same model to talk about a whole bunch of different things: set theory, topology, and even theoretical physics! (And, doubtless, countless more I'm simply unaware of.) While each of these fields is again just a tool--including categories themselves--it seems that they all reflect some deep underlying structure.

Now, of course, I haven't addressed a very important point: is this "underlying structure" a reflection of the universe or just a reflection of the human mind? Is it perhaps a reflection of the underlying structure of reasoning itself?

Coming up with a reasonable answer to any of these questions is going to take a lot more thought and a lot more writing, but I lean towards the last option: the underlying ideas--the minimal starting points that are enough to uncover the structures I talked about--are far too simple to be a mere artifact of the human mind, which is itself wantonly complex.

So perhaps I am a Platonist myself. I've never thought about it in those terms, but it seems broadly consistent with my general ideas about life, the universe and everything.


Category theory is beautiful because it's so unavoidable. We might critique Turing machines for being tied intimately to our particular mode of thought--both the Turing tape and the lambda calculus are specific in a certain way, at least in comparison to category theory. (Not that I'm arguing that computational models somehow fail to touch on another kind of Platonic ideal.)

Category theory arises from probably the simplest imaginable notions of "transformation". There's no way to avoid the charge of anthropocentrism here, but it's hard to imagine an alternative theory where this doesn't exist.
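(The axioms really are that small. Here they are checked pointwise in Python for the category whose objects are types and whose arrows are ordinary functions; my sketch, spot checks rather than a proof.)

    # A category needs only: composition, identities, and two laws.
    identity = lambda x: x
    compose = lambda g, f: lambda x: g(f(x))    # "g after f"

    f = lambda x: x + 1
    g = lambda x: 2 * x
    h = lambda x: x ** 2

    for x in range(10):
        # associativity: h . (g . f) == (h . g) . f
        assert compose(h, compose(g, f))(x) == compose(compose(h, g), f)(x)
        # identity laws: id . f == f == f . id
        assert compose(identity, f)(x) == f(x) == compose(f, identity)(x)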


This is a great post, but

> they're a simple model of a computing machine that makes sense just based on existing technology

Turing machines were invented before any working (almost-) universal computer existed.


Hah, yes. I was actually thinking of a literal tape made of paper, or something to that effect. As compared to the lambda calculus, which is based entirely on logic rather than some allusion to the real world.



