> if we want to usefully operationalize the question of whether computers are capable of understanding we should jettison the consciousness question and focus instead on questions like this: Do computers process physical information in broadly the way human brains are processing it when humans are doing what we call understanding?
The writer jettisons consciousness in order to focus on something we can actually argue about, which I appreciate, but then proposes a highly anthropocentric framing. I wonder whether we need to be that narrow. I'm not well read on the subject, but it seems to me we should be able to come up with something a little more forgiving. Why is human brain processing the benchmark? If an AI understands in its own way, and gets better results than any human, wouldn't we accept that as understanding?