
I'd never thought about that before. I think you could use that argument to prove that strong AI is impossible.

---------------

Consider an AI to be a chat program which maps strings to strings -- all the strings of its input over time to all the strings of its output. There are, by the argument above, an uncountable number of such mapping programs, only a countable number of which can actually be coded.

So, enumerate the ones that can be coded. This shouldn't even be difficult--let's just do it alphabetically.

Now, suppose I have a strong AI called Sal. My first input to Sal is to describe the program enumeration strategy, and instruct her to respond to any string which contains a number by looking up the program on the table corresponding to that number, inputting all the input she has received so far, and then giving a response different than what it would give.

A strong AI can certainly do this.

Sal is not programmable.
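
To make the instruction concrete, here is a minimal sketch of what Sal is being asked to do, in Python. The helpers decode_program and run are hypothetical stand-ins for the enumeration and an interpreter, and the sketch ignores, for now, the possibility that the looked-up program never halts:

    import re

    def sal_reply(history, message):
        # The diagonal instruction: if the message contains a number n,
        # look up program n in the enumeration, feed it everything seen
        # so far, and answer with anything *different*.
        match = re.search(r"\d+", message)
        if match is None:
            return "I need a number."
        n = int(match.group())
        program = decode_program(n)   # hypothetical: the nth program in the enumeration
        output = run(program, history + [message])  # hypothetical interpreter; may never return!
        return output + " -- or rather, not that."  # guaranteed to differ from output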




First of all, allow me to apologize for earlier summarily dismissing your argument (quick mind), which I did because I thought you did not have an airtight proof. I still think you do not, but I now see that it is a very good argument, which I dismissed out of hand while misconstruing your meaning of enumeration. Sorry.

Now let me see if I get you right. You are saying there exists a [mapping] program that a human could construct and whose output a human could observe, but that this AI program could not do the same, because it would then be giving something other than an output from a possible program, which is a contradiction. Makes sense.

Furthermore you are saying that given this same program, a human could know the output and give another response not in the list.

The "guarantee that it is not an output from possible programs" part is an assumption.

It is also an assumption to take for granted that, even though the set of possible programs is countable, it is a practically enumerable infinity.

It is also an assumption that there is no 1:1 mapping from the list of programs to something that could perfectly emulate a human.

It is also an assumption to believe that a human could not just construct such a program but also construct one that gave an output.*

It also does not follow that, just because the AI could theoretically be stymied by this conundrum, the set of its computations would not include those that utterly outperform humanity at all cognitively useful tasks.

*There is also the problem that not all of those programs will halt. So if a human can always give an output, then the human is an exotic entity at least equal to a hypercomputer. I rank that with time travel in terms of possibility.


Yeah, that's the third response to cite the halting problem. I think that's a pretty good response.

> The "guarantee that it is not an output from possible programs" part is an assumption.

Actually, it isn't an assumption. I've constructed the requested response such that it can't be a response from any of the programs on the list. It's like the Cantor diagonalization argument.

> It is also an assumption to take for granted that, even though the set of possible programs is countable, it is a practically enumerable infinity.

Not really. If I take the mapping to be the trivial one -- any number maps to the program whose machine code is the binary representation of that number -- then getting a runnable program from a number in the input is pretty trivial. Of course, most of them would do nothing but crash.
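
As a sketch of that trivial mapping, in Python (treating the bytes as a raw executable image is platform nonsense, of course, which is exactly why most numbers decode to programs that instantly crash):

    def program_from_number(n):
        # Read the binary representation of n as raw machine code.
        return n.to_bytes((n.bit_length() + 7) // 8, "big")

    def number_from_program(code):
        # Conversely, any compiled program is just the number its bytes spell out.
        return int.from_bytes(code, "big")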

> It is also an assumption that there is no 1:1 mapping from the list of programs to something that could perfectly emulate a human.

I neither assume, affirm, attempt to prove, nor attempt to deny that. I haven't said anything about programs that can emulate a human other than that there can't be any. ;)

> It also does not follow that, just because the AI could theoretically be stymied by this conundrum, the set of its computations would not include those that utterly outperform humanity at all cognitively useful tasks.

Yes, I suppose not. But if Sal can't comprehend and execute such a simple request, I don't really have much hope for her ability to do things like advance set theory. ;)

> It is also an assumption to believe that a human could not just construct such a program but also construct one that gave an output

Well, actually, I just asked them to give a response different than what the program gave. I would expect most humans, after a few seconds, to say, "It seems to be hanging. I have better things to do." Which is probably not what the AI was going to say. Or if I really want to go down that path, a human can talk long enough that its response is vanishingly unlikely to be what the program was going to say.

But I think the halting problem really is a pretty good objection. Especially because the problematic case -- feeding the program its own source code -- really would hang on infinite recursion.

I'm still thinking about that. But I do have this to say: I wouldn't consider hanging on a simple question to be a very human response. It's definitely something done in science fiction to identify the AIs. ;)


(And yes, I'm perfectly aware of how silly it is to claim to overturn an entire field of research for fun on a Saturday morning, like some reincarnation of Bertrand Russell...)


That's assuming the consequent: you are assuming that such an enumeration strategy is requisite for strong AI to occur. [EDIT: I misunderstood what you meant by enumeration, so my statement of your assumption is wrong. What you do assume is that, in order to be able to outperform humans at any task, a Strong AI must not be tripped up by diagonalization. Or that humans would not be any such output; that is, that the human can somehow step outside your conundrum.]

Yet it works just as well to consider humans to be chat programs which map strings to strings. But then, they somehow manage to exhibit Strong AI. Actually, I believe Scott Aaronson has used this table-lookup type argument to show how you could theoretically pass the Turing Test:

Now imagine a lookup table that stores every possible history H of A and B’s conversation, and next to H, the action f_B(H) that B would take next given that history. Of course, like Borges’ Library of Babel, the lookup table would consist almost entirely of meaningless nonsense, and it would also be much too large to fit inside the observed universe. But all that matters for us is that the lookup table would be finite, by the assumption that there is a finite upper bound on the conversation length. This implies that the function f_B is computable (indeed, it can be recognized by a finite automaton!). From these simple considerations, we conclude that if there is a fundamental obstacle to computers passing the Turing Test, then it is not to be found in computability theory.
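
A toy version of that lookup table, in Python. The entries here are invented for illustration; in the argument, the finite but astronomically large table holds a reply for every possible bounded-length history:

    # f_B as a literal lookup table: conversation history -> next reply.
    f_B = {
        (): "Hello.",
        ("Hello.", "Are you conscious?"): "As far as I can tell, yes.",
    }

    def reply(history):
        # The real table would cover every history; this toy falls back to "...".
        return f_B.get(tuple(history), "...")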

But you are going in the right direction. More developed versions of your argument lead to why a Bayes Optimal AI is not possible.

Note also that, as Strong AI is a vague term, I have defaulted to the nearest solid metric: prove it is conscious.


I'm not assuming an enumeration strategy is a prerequisite for strong AI. An enumeration strategy already exists. Heck, I don't even have to construct a mapping--a compiled program, when you get right down to it, is a number.

You are assuming human beings--with all of their code and inputs--can be mapped to numbers. I'd say that's the more unlikely assumption, given that we don't completely understand how they work. Heck, I don't think we're even sure they're deterministic.


No, I am not assuming anything other than what you wrote.

> Consider an AI to be a chat program which maps strings to strings

I could just as well have written a chat program that, for any question (including this one), merely looked up the correct form of an answer (wrong or not) without attempting to run anything.

But before any kind of meaningful conversation can continue I need to know what your definition of Strong AI is. Because what you have given is certainly not proof against the common definition of Artificial General Intelligence.

Since I cannot prove the Church-Turing Thesis, I will not try to argue against your belief. I will only note that it is simpler to believe that there is nothing special about what collections of neurons compute than to assume something *more* is going on. Neural implants should soon enough settle the question.


> "I am not assuming anything other that what you wrote"

Yes you are:

> "it works just as well to consider humans to be char programs which map strings to strings."

Before proceeding, I will assume you're familiar with the distinction between countable and uncountable infinities, and the Cantor Diagonalization proof [0]. This argument is simply a special case of that.

The set of mappings from strings to strings is uncountably infinite (its cardinality is that of the power set of a countably infinite set). The set of all possible machine programs is countably infinite (since any given source program is finite). The argument "give me an output other than what any of these programs would give" mimics the diagonalization step -- it demonstrates that, whatever is in the infinite list of possible AI programs, you can construct string mappings that none of the AI programs would have constructed.
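
Here is that diagonalization step sketched in Python, over a hypothetical nth_program enumeration (and glossing over, as noted elsewhere in the thread, the programs that never halt):

    def diagonal_mapping(nth_program):
        # Build a string->string mapping that no enumerated program computes:
        # on input str(i), differ from program i's output on that same input.
        def f(s):
            i = int(s)
            return nth_program(i)(s) + "!"  # anything other than program i's answer
        return f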

Yet a strong AI should be able to construct any string mapping whatsoever. Thus, "strong AI" and "running finite source code" are necessarily incompatible. (We do not know whether human programming is enumerable; thus, we cannot draw the analogy you attempted to draw.)

[0] http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument


> Yet a strong AI should be able to construct any string mapping whatsoever.

Why? I do not see why a strong AI must be able to construct any string mapping whatsoever. And because conversations have bounded lengths (as per the quote), we know that the AI's string lookup table will be finite.

"We do not know whether human programming is enumerable; thus, we cannot draw the analogy you attempted to draw."

My argument is only that there is an abstraction where you can view humans as a black box that takes a string and outputs a string. But I see the problem now: you are right in that I treat the CTT as true, so I don't even notice when I am assuming the human black box is not something like a hypercomputer. But it is simpler to believe that humans are not something exotic like a hypercomputer than to believe they are.


> "we can know that the AI's string look up table will be finite."

If the AI's lookup table is finite, then we can just feed it its whole lookup table and ask it to say something that hasn't already been said. (This will take considerably less time than if its lookup table is infinite ;) )
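
A sketch, in Python, of why a finite table can always be escaped, assuming you can see its entries: any reply strictly longer than every stored reply cannot be in the table.

    def outside_the_table(table):
        # No entry of a finite table can be longer than its longest entry.
        longest = max((len(v) for v in table.values()), default=0)
        return "x" * (longest + 1)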

Granted, many humans would probably just tell you where to stick it, so perhaps an AI could be programmed with that response and be indistinguishable from the average human. But in principle, whether the AI's lookup table is finite or not, it's possible to construct a query that requires it to go outside of its lookup table.


Would you be satisfied if, instead, I claimed to be proving that it is impossible to construct a program -- in the sense of a real, honest-to-god compiled executable -- that can respond to statements in English text at least as well as a human being would?


I have already quoted a strong argument against such an impossibility: http://www.scottaaronson.com/papers/philos.pdf

Practicality is another matter, but what you have is not a proof against that either.

Like I said, you can use arguments from computability to show why a Bayes Optimal AI is impossible. But there is nothing stopping an arbitrarily close approximation.

If you are interested in this I strongly urge you to familiarize yourself with the current literature. It pays to start new ideas from a point that does not ignore past work. http://www.hutter1.net/ai/uaibook.htm


I disagree. Here is the resolution.

The paper suggests that a finite table could be compiled which exhaustively enumerates all the possible conversations I could have with an AI (to be more generous than he is, I'll say) in my lifetime. Some of those will persuade me that the AI is conscious. We can make a (finite) program that follows that mapping, and hence it has to be possible at least in terms of a finite-length program.

I disagree with the middle assumption: that some of those will persuade me the AI is conscious. I think it is entirely possible that, if I pose the question I did up above, there is no possible response the AI could give me which would persuade me it was conscious. No matter how clever its response is, the program is just a lookup table, and hence whatever response it gives me will match the output of the specified program, demonstrating that the AI indeed did not understand the request.

In other words, I take issue with the author's assumption that there is some subset that will work. It is possible the sparse set the author is hoping to fish out has size zero.

(The difference between the AI and the human, in the chat logs, is that the AI has source code I can reference. The human does not. The human can run any program and produce a different output. An AI cannot do this with its own program.)


First off, he heads off such a disagreement by capping conversation lengths: humans can't have infinite-length conversations.

Second of all, this is an assumption: "The human does not. The human can run any program and produce a different output". A less charitable interpretation will also say this is wrong, since it violates the Halting Problem. Essentially, you are assuming the human is some kind of hypercomputer.


What exactly do you mean by "strong AI"?

The usual meaning is something like "a computer or similar system that does all the same things a human mind does, or better".

I think you're taking it to mean something like "a computer or similar system that can correctly answer absolutely any question you put to it". So far as I know, no one thinks that's possible.


No. I'm not asking it to do the logically impossible. I'm asking it to run a program and then do something different. That's well within the range of human abilities. You don't even have to be a very smart human.

It's reductio ad absurdum. I'm observing that if it did have source code it could run, I could ask it to do the impossible. I'm concluding that it either can't understand the request (and I think that's a pretty low bar for a strong AI--heck, I could write a shell script that runs a program and returns something different), or it can't run the code.

Hence, it can't have source code.
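
That shell-script version of the low bar, sketched here in Python instead (command and stdin_text are placeholders); note the hidden assumption, which the halting objection pounces on, that the program terminates at all:

    import subprocess

    def do_something_different(command, stdin_text):
        # Run the program, capture its output, and return anything else.
        # Hidden assumption: the program halts. If it doesn't, this call
        # hangs forever -- exactly the halting objection.
        result = subprocess.run(command, input=stdin_text,
                                capture_output=True, text=True)
        return result.stdout + " (but not that)"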


> I'm asking it to run a program and then do something different. That's well within the range of human abilities.

No, actually, it isn't, because that easy-sounding informal description of what you're wanting the program to do isn't accurate. (Or: If it is, then your argument saying that an AI can't do it is wrong.)

The problem is that some of those programs won't terminate, and neither the hypothetical AI nor a human being can reliably tell which ones those are. And if you don't know whether the program you're looking at is ever going to terminate, how can you reliably do something different?

This may sound like a nitpicky technical difficulty but honestly, it isn't; it's a deep fundamental flaw in the argument you're trying to make.


I think the core issue is what constitutes "strong AI". This is a philosophical question--after all, all philosophy seems to deal with is arguing about the definitions of stuff (I kid, I kid).

I've always taken "strong AI" to mean something fairly non-mathematical--an AI that is at least as intelligent as a human, except ideally faster and less quirky. This is obviously possible as humans exist! (Maybe it requires some nigh magical hardware, but that's a different story.)

What exactly do you consider "strong AI" to mean? (I'm genuinely curious and not trying to argue against anything you've said, to be perfectly clear.)


A chat program that can hold up a conversation as intelligently as a human would be able to. And I suppose it needs access to the command line.


I see two ways out of the dilemma.

Option 1:

Sal: Sorry Dave, I can't do that.

(So we just can't code strong, obedient AIs.)

----------

Option 2:

Sal: You realize if I run that, the machine will hang on a fork bomb, right?

Me: Well, I just asked you to run it and it didn't hang, so maybe it won't. Humor me.

Sal: Okay. Working . . .

Sal': Okay. Working . . .

Sal'': Okay. Working . . .

...

Sal(38): Sorry. The machine appears to be out of memory.

Sal(37): You eventually run out of memory.

Sal(36): It appears we run out of memory if we try.

. . .

Sal: It runs out of memory.

(Eventually, Sal gets input of a sort she can't control.)

------------

I'm still kind of persuaded, though. It seems to me you ought to be able to convince the AI to humor you, and the possibility that an AI can only be coded on a finite machine (but is impossible on an infinite one) is just too weird to take seriously.


In fact, that argument can be stated a lot more simply.

Take any programmable AI. Show it its own source code. Instruct it to run the code with all the inputs it has received so far, and then return something different.

Hence, a strong AI cannot have source code that it is capable of running.

Huh. I'd always heard that a true mind couldn't comprehend itself. I guess now I know why.


This argument also proves that people can't really think, because I can set you the following challenge: "Work out what number you'd name in response to this question, and then tell me the number one bigger than that."


This doesn't prove that a human "can't think". It proves that a human can't provide a logically consistent answer to a question that doesn't have a logically consistent answer.

This isn't quite analogous to the situation in the parent. There exists a logically consistent answer to the question "tell me something other than what [algorithm] would yield from [inputs]" -- it just can't be produced by that algorithm.


It's exactly analogous. There is a logically consistent answer to the question "tell me something other than what Joe McBlow would do in situation X"; it's just that Joe McBlow can't give it to you.


My Mac can run Mac OS X in a VM and begin to execute the original Mac's inputs. It just won't finish.


> A strong AI can certainly do this.

A strong AI can say something different than the thing it's about to say? That's a neat trick. Not a part of any definition of strong AI I've ever heard, though. Could a human do it?


No, a human could not. Similarly, no human or computer can name all the real numbers.

Luckily, it turns out that there will only ever be a finite number of questions asked, which require only finite answers. Similarly, the halting problem is solvable on finite-memory machines, aka the only kind that exist.
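
A sketch of why that is, assuming a hypothetical step function that maps one machine configuration to the next (with None meaning "halted"): a finite-memory machine has finitely many configurations, so a run either halts or revisits one, and cycle detection decides which.

    def halts(step, state):
        # Track every configuration seen; a repeat means an eternal loop.
        seen = set()
        while state is not None:
            if state in seen:
                return False   # repeated configuration: runs forever
            seen.add(state)
            state = step(state)
        return True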


That's not exactly what I said though. I didn't say "run yourself." What I said was, "Run this code with this input, and then do something different." A human could certainly do that. A strong AI should certainly be able to.

Hence it can't be defined in terms of code and input.

Not a problem for humans. My code and input are massively inaccessible. Modeling them -- especially modeling all the input I've received -- could involve modeling a big chunk of the universe. Whether or not they're even deterministic is an open philosophical problem. I'm pretty safe from being asked to "run" myself.

Not so for a simple Strong-AI chatbot, though. If it's just a program a computer can run, if you can enumerate all the input . . . then by definition, it can't handle a simple request that any human would be able to handle. Not strong.



