There's a lot of fine distinction, and a hell of a lot of study, around these kinds of questions. One thing you might ask of your proofs is that they be immediate: that they can be (a) grasped and (b) believed, at least in principle, after only a finite amount of work. Further, you should be able to prove that your proof has these properties.

Without these properties, it's hard to call something a proof since it might simply go on forever and never resolve the question in the way you wish it would. This is absolutely the realm of things like the Halting Problem.

So 200TB of data plus a short proof that the search completed could, at least in principle, be immediately checked in finite work. On the other hand, the mere program which generates this data could not be so checked unless it also came with a proof of its own termination. We have no general process for generating such proofs (that's the Halting Problem again), so you must construct one specifically for your generation program.
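
A toy illustration (a hypothetical example of mine, not the actual 200TB setup), sketched in Python: checking a certificate is one bounded computation you can believe by inspection, while the generating program only counts as a proof once someone supplies its termination argument.

    # Toy claim: "n is composite."  The certificate is a divisor d.
    def check_certificate(n, d):
        # Immediate: one bounded check, obviously terminates.
        return 1 < d < n and n % d == 0

    def generate_certificate(n):
        # The generating program.  By itself it settles nothing until
        # you add the termination argument: d is bounded above by n,
        # so the loop runs at most n - 2 times.
        for d in range(2, n):
            if n % d == 0:
                return d
        return None  # no nontrivial divisor found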

If it's cheaper to just spit out 200TB than to communicate why you're sure your program won't spin forever, then the better proof is the one that's 200TB long.

---

As a side note, this sounds a little odd because we're implicitly saying that "small is better" in proofs. In an ultimate sense this is probably true: if I had before me all of the proofs of something, I would order them by length and pick the smallest.

But in practice we cannot enumerate proofs; we must expend energy to discover new ones. So there are at least two costs to optimize: the effort required to find a proof, and the effort required to communicate/transmit it to someone else.

This 200TB proof may optimize the search cost well while failing badly on the communication cost.




Once you know the answer, you say "Take this program and run it with n=7825." That is your compact, finite proof, not the untested program. You don't have to meta-prove halting; you just give it the required CPU hours.
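
For context, n=7825 here is presumably the Boolean Pythagorean triples problem (the Heule-Kullmann-Marek 200TB result). A sketch of the "compact program" one would hand over, written as a naive search for readability (the actual computation encoded the question for a SAT solver):

    from itertools import product

    def pythagorean_triples(n):
        # All (a, b, c) with a*a + b*b == c*c and c <= n.
        squares = {c * c: c for c in range(1, n + 1)}
        return [(a, b, squares[a * a + b * b])
                for a in range(1, n + 1)
                for b in range(a, n + 1)
                if a * a + b * b in squares]

    def two_colorable(n):
        # Naive exponential search: does any 2-coloring of 1..n avoid
        # a monochromatic Pythagorean triple?  Hopeless at n = 7825;
        # the real computation used a SAT solver, and the 200TB is
        # the solver's clause-by-clause refutation certificate.
        triples = pythagorean_triples(n)
        for coloring in product((0, 1), repeat=n):
            if all(not (coloring[a-1] == coloring[b-1] == coloring[c-1])
                   for a, b, c in triples):
                return True
        return False

In principle two_colorable(7824) is True and two_colorable(7825) is False; that is the theorem, though nobody can run this naive version at that scale.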


While I think what you're saying is true, it's frowned upon. It amounts to saying "I've discovered a truly marvelous proof, and it's at the bottom of this cave." We've dodged the infinity risk, but not the resource cost of communicating the proof.

So it's kind of just crap UI at this point. Given that proofs are essentially social objects, that's a pretty big mark against this style of proof, even if it doesn't totally sink the ship.


The communication is easy. You show someone your nice compact program, and they don't have to run it to understand it.

A 200TB proof is literally impossible to communicate to a human. Where is the benefit of that formulation?

The huge version is not better for a machine either. 200TB is a lot of storage, and you could rerun 30000 CPU-hours of calculation in under a month on a single semi-high-end 4U server (30000 CPU-hours / 48 cores is about 625 hours, roughly 26 days).


But if I don't trust you, then your program and its claimed resource limit can only be verified by my executing it. That might be incredibly, or even infeasibly, expensive, and unless you've also got a resource-bound proof, I might be sunk.

On the other hand, the code which verifies a fully expanded proof can be very simple and easy to check/trust/believe.
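
To make that concrete, a hypothetical mini-checker for the problem above: verifying a claimed witness (here, a 2-coloring for the satisfiable n=7824 side) is a short, obviously terminating loop, however much work it took to find the witness. Checking the 200TB unsatisfiability certificate is the same idea at scale: simple code making one pass over a huge input of logged solver deductions.

    from math import isqrt

    def check_coloring(coloring, n):
        # Verifier for the expanded side: given a claimed 2-coloring of
        # 1..n, confirm that no Pythagorean triple is monochromatic.
        # A few transparent loops; trusting this is much easier than
        # trusting the search that produced the coloring.
        for a in range(1, n + 1):
            for b in range(a, n + 1):
                c = isqrt(a * a + b * b)
                if (c <= n and c * c == a * a + b * b
                        and coloring[a-1] == coloring[b-1] == coloring[c-1]):
                    return False
        return True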



