> As soon as you need reliable outcomes, such as certainty about whether an erroneous state can arise in a program, whether a proof of a mathematical conjecture exists, or whether a counterexample exists, exhaustive search is often necessary.
Checking proofs is easier than finding proofs.
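To make the asymmetry concrete, here is a minimal Lean 4 sketch (the theorem names are mine, purely for illustration): checking a proof means the kernel verifies a term that is already given, while finding one may require search or exhaustive evaluation, as `decide` does.

```lean
-- Checking: an explicit proof term is handed to the kernel, which only
-- has to verify that the term has the stated type. No search is involved.
theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q :=
  hpq hp

-- Finding: no term is supplied here; the `decide` tactic has to construct
-- one itself, by exhaustively evaluating the decidable proposition.
theorem small_fact : 2 ^ 10 = 1024 := by
  decide
```

Once `decide` has produced a proof term, re-checking the theorem is cheap: the search is never repeated, only the resulting term is verified.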
> The question then quickly becomes: How can we best delegate this search to a computer, in such a way that we can focus on a clear description of the relations that hold between the concepts we are reasoning about? Which symbolic languages best let us describe the situation so that we can reliably reason about it?
These questions are largely answered. Or, at least, the methodology for investigating these types of questions is well-developed.
I think the more interesting question is co-design. What do languages and logics look like when they are designed for incorporation into new-fangled AI systems (perhaps with a human in the loop as well), rather than for purely manual use?