
Would you be satisfied if, instead, I claimed to be proving that it is impossible to construct a program -- in the sense of a real, honest-to-god compiled executable -- that can respond to statements in English text at least as well as a human being would?



I have already cited a strong argument against such an impossibility: http://www.scottaaronson.com/papers/philos.pdf

Practicality is another matter, and what you have is not a proof against that either.

Like I said, you can use arguments from computability to show why a Bayes Optimal AI is impossible. But there is nothing stopping an arbitrarily close approximation.

If you are interested in this I strongly urge you to familiarize yourself with the current literature. It pays to start new ideas from a point that builds on past work rather than one that ignores it. http://www.hutter1.net/ai/uaibook.htm
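For reference, the obstruction shows up directly in the definition of the Bayes-optimal agent (AIXI); roughly, from memory of Hutter's book, the action at time k is an expectimax over all environment programs q consistent with the interaction history, weighted by program length:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine and \ell(q) is the length of program q. The inner sum over all programs consistent with the history is what makes the exact agent incomputable, while cutting it off at bounded program length and runtime gives computable approximations (AIXItl and friends).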


I disagree. Here is the resolution.

The paper suggests that a finite table could be compiled which exhaustively enumerates all the possible conversations I could have with an AI in my lifetime (to be more generous than he is, I'll put it that way). Some of those conversations would persuade me that the AI is conscious. We can make a (finite) program that implements that mapping, and hence a convincing AI has to be possible, at least in the sense of existing as a finite-length program.
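To make the construction concrete, here is a minimal sketch of the lookup-table program (the entries are toy placeholders; the real table would be astronomically large, but still finite once conversation length is capped):

    # A lookup-table "AI": every possible conversation prefix, up to the
    # capped length, maps to a canned reply. The entries below are toy
    # placeholders; the real table would be astronomically large but finite.
    table = {
        (): "Hello.",
        ("Hello.", "Are you conscious?"): "As far as I can tell, yes.",
        # ... one entry per possible conversation prefix, finitely many in all
    }

    def reply(history):
        """Return the canned reply for this conversation prefix."""
        return table.get(tuple(history), "I have no entry for that.")

    print(reply(["Hello.", "Are you conscious?"]))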

I disagree with the middle assumption: that some of those conversations would persuade me the AI is conscious. I think it is entirely possible that, if I pose the question I did above, there is no response the AI could give which would persuade me it was conscious. No matter how clever its response is, the program is just a lookup table, and hence whatever response it gives me will match the output of the specified program, demonstrating that the AI did not in fact understand the request.

In other words, I take issue with the author's assumption that there is some subset that will work. It is possible the sparse set the author is hoping to fish out has size zero.

(The difference between the AI and the human, in the chat logs, is that the AI has source code I can reference. The human does not. The human can run any program and produce a different output. An AI cannot do this with its own program.)


First off, he heads off such a disagreement by capping conversation lengths: humans can't have infinite-length conversations.

Second of all, this is an assumption: "The human does not. The human can run any program and produce a different output." A less charitable interpretation would say it is simply wrong, since it runs into the Halting Problem. Essentially, you are assuming the human is some kind of hypercomputer.
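To spell that out: "run any program and produce a different output" is exactly the diagonalization move, and no program in an enumeration can carry it out against itself. A toy sketch (the program list is hypothetical, and everything here halts; over arbitrary programs the same move requires deciding halting):

    # Toy diagonalization: a "contrarian" that runs program i on input i and
    # outputs something different. It beats every listed program...
    programs = [
        lambda n: 0,      # program 0: constant zero
        lambda n: n,      # program 1: identity
        lambda n: n * n,  # program 2: square
    ]

    def contrarian(i):
        return programs[i](i) + 1  # differ from program i on input i

    assert all(contrarian(i) != p(i) for i, p in enumerate(programs))

    # ...but it cannot itself be on the list: if contrarian were programs[k],
    # then contrarian(k) = programs[k](k) + 1 = contrarian(k) + 1, which is
    # impossible. Extending the trick to arbitrary (possibly non-halting)
    # programs is where the Halting Problem, i.e. hypercomputation, comes in.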




