This is all very interesting, but it seems we're just taking different views on what the instance size is. If it's the length of the theorem statement in some suitable encoding, and the goal is to find a proof of any possible length, then yeah, that's way too hard.
I'm taking the view that the (max) length of the proof can be treated as a parameter of the complexity, because anything too long would have no chance of being found by a human. It might not be trusted by mathematicians anyway... do you know that the hardware is bug-free, that the compiler is 100% correct, and that no cosmic ray corrupted some part of your exponential-length proof? It's a tough sell.
I'm talking about the proof length, not the length of the proposition. In CiC you can have proofs like (for example) "eq_refl : eq <some short term that takes a very long time to compute> <some other short term that takes a very long time to compute>", where both terms are guaranteed to terminate without you having to actually execute them, so this isn't semidecidable or anything, just really slow. In other words, there exist short correct proofs that cannot be verified in polynomial time (where "verify" is essentially the same as type checking). So "find a proof of length n for this proposition" is not a problem in NP. You can't just say "ah, but proofs that long aren't meaningful in our universe" when the proof itself can be short.
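To make this concrete, here is a small sketch of the kind of thing I mean. It's written in Lean 4 rather than Coq (so rfl stands in for eq_refl), and "slow" is just a name I made up for the illustration:

    -- slow n always evaluates to 0, but naive reduction takes on the order
    -- of 2^n steps, because each call spawns two independent recursive calls.
    def slow : Nat → Nat
      | 0     => 0
      | n + 1 => slow n + slow n

    -- The statement and the proof term (rfl) are both tiny, but to accept
    -- the proof the kernel has to reduce slow 16 all the way down to 0 to
    -- decide definitional equality. Bump 16 up and the checking time grows
    -- exponentially, while the proof itself stays a single constant.
    example : slow 16 = 0 := rfl

The proof object never gets any bigger; only the time to check it does.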
Could you give me a reference? This is not something I'm familiar with.
Can you claim that this equivalence proof is not in NP without requiring this specific encoding? I would be very surprised to learn that there is no encoding in which such proofs can be checked efficiently.
> Can you claim that this equivalence proof is not in NP without requiring this specific encoding? I would be very surprised to learn that there is no encoding in which such proofs can be checked efficiently.
There are encodings in which such proofs can be checked efficiently, such as the one that enumerates every step. Necessarily, however, transcribing the proof into such an encoding takes more than polynomial time, so the two problems aren't equivalent from a P vs. NP perspective. To see why this matters, note that any 3SAT formula can be converted to DNF with a worst-case exponential size blowup, and the converted formula can then be solved in LINEAR time! So the restriction to polynomial-time-convertible encodings is really, really important in proofs of reductions in NP, and can't be glossed over to argue that two problems are the same in the context of NP-completeness.
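To spell that out with a toy example (Python; the function names are mine, just for illustration): satisfiability of a DNF formula really can be decided in time linear in the size of the formula, because a DNF is satisfiable exactly when some term avoids containing both a literal and its negation. The catch is that distributing a 3SAT instance with k clauses into DNF can produce up to 3^k terms, so "linear in the DNF" can still be exponential in the original input:

    from itertools import product

    # Literals are nonzero ints: +i is variable i, -i is its negation.
    # A CNF/DNF formula is a list of clauses/terms, each a list of literals.

    def dnf_satisfiable(terms):
        """Linear-time check: a DNF is satisfiable iff some term
        contains no complementary pair of literals."""
        for term in terms:
            lits = set(term)
            if not any(-lit in lits for lit in lits):
                return True
        return False

    def cnf_to_dnf(clauses):
        """Distribute a CNF into DNF by picking one literal per clause;
        a 3-CNF with k clauses yields up to 3^k terms."""
        return [list(choice) for choice in product(*clauses)]

    # Tiny 3-CNF: (x1 or x2 or not x3) and (not x1 or x3 or x4)
    cnf = [[1, 2, -3], [-1, 3, 4]]
    dnf = cnf_to_dnf(cnf)                  # 3 * 3 = 9 terms here, 3^k in general
    print(len(dnf), dnf_satisfiable(dnf))  # prints: 9 True

The solver is genuinely linear, but only in the size of the already blown-up DNF, which is exactly why a reduction has to be computable in polynomial time before it tells you anything about NP.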