Hacker News

What exactly do you mean by "strong AI"?

The usual meaning is something like "a computer or similar system that does all the same things a human mind does, or better".

I think you're taking it to mean something like "a computer or similar system that can correctly answer absolutely any question you put to it". So far as I know, no one thinks that's possible.




No. I'm not asking it to do the logically impossible. I'm asking it to run a program and then do something different. That's well within the range of human abilities. You don't even have to be a very smart human.

It's a reductio ad absurdum. I'm observing that if it did have source code it could run, I could ask it to do the impossible. I'm concluding that it either can't understand the request (and I think that's a pretty low bar for a strong AI--heck, I could write a shell script that runs a program and returns something different), or it can't run the code.

Hence, it can't have source code.
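For concreteness, that "shell script" low bar can be sketched in a few lines of Python (names hypothetical); note that the sketch quietly assumes the program it runs actually terminates, which is where the reply's objection bites:

```python
import subprocess

def run_and_differ(cmd):
    # Run a program, capture its output, and return something different.
    # Hypothetical sketch: it only works if `cmd` terminates, because
    # subprocess.run blocks until the child process exits.
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return out + "x"  # any value other than `out` will do

# run_and_differ(["echo", "hi"]) returns something other than "hi\n".
```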


> I'm asking it to run a program and then do something different. That's well within the range of human abilities.

No, actually, it isn't, because that easy-sounding informal description of what you want the program to do isn't accurate. (Or: if it is accurate, then your argument that an AI can't do it is wrong.)

The problem is that some of those programs won't terminate, and neither the hypothetical AI nor a human being can reliably tell which ones those are--that's the halting problem. And if you don't know whether the program you're looking at is ever going to terminate, how can you reliably do something different from its output?

This may sound like a nitpicky technical difficulty but honestly, it isn't; it's a deep fundamental flaw in the argument you're trying to make.
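The non-termination gap can be made concrete with a sketch (all names here are hypothetical). The best any program can actually do is simulate another program for a bounded number of steps, which yields either "it halted" or "don't know"--never a reliable "it loops forever":

```python
def halts_within(fn, steps):
    # Bounded stand-in for the impossible halting oracle: step the
    # program (modeled as a generator) for at most `steps` steps.
    gen = fn()
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True   # observed to halt
    return None           # unknown: might halt later, might loop forever

def halting():
    # A program that halts after a few steps.
    yield from range(3)

def looping():
    # A program that never halts.
    while True:
        yield
```

No choice of `steps` turns that `None` into a definite answer for every program, which is why "run it, then do something different" isn't a total procedure.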



