
If you classify, say, 50% of people as AIs and 50% of AIs as people, then those AIs passed the Turing test.

So, you are limited to questions that most people will answer correctly. Further, if you find some unusual question that works today someone can just add it to the program for next time.




>If you classify, say, 50% of people as AIs and 50% of AIs as people, then those AIs passed the Turing test.

And if your version of the test is "heads it's an AI, tails it's human" then any AIs that are classified as human will have "passed the Turing test."


I don't think you understood.

The original test specifically had exactly one human and one AI. So, if the judge is forced into a coin flip, that really is success. If the judge does a coin flip because they are lazy, then that's not a Turing test.


Did you check the link about the Winograd schema challenge? The questions test common sense reasoning, and are very easy for humans to answer. An example:

The trophy doesn't fit into the brown suitcase because it's too large. What is too large? A: The trophy B: The suitcase


Coding a specific solution to questions in exactly that format is not inherently difficult.
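For example, here is a minimal sketch in Python of what such a hard-coded solution might look like for that exact question format. The answer_winograd function and the SIZE_HINTS lookup are hypothetical names, and the adjective-to-noun associations are assumptions for illustration, not a real knowledge base.

    import re

    # Hard-coded hints: which adjective points at which noun in the template
    # "The X doesn't fit into the Y because it's too <adj>."
    # These associations are illustrative assumptions only.
    SIZE_HINTS = {
        "large": "first",   # "too large" -> the thing being put in (first noun)
        "small": "second",  # "too small" -> the thing holding it (second noun)
    }

    def answer_winograd(question, option_a, option_b):
        """Answer questions of the exact form
        'The X doesn't fit into the Y because it's too <adj>. What is too <adj>?'
        by pattern-matching the adjective against the hard-coded lookup."""
        match = re.search(r"too (\w+)", question)
        if not match:
            return "unknown"
        hint = SIZE_HINTS.get(match.group(1))
        if hint == "first":
            return option_a
        if hint == "second":
            return option_b
        return "unknown"

    print(answer_winograd(
        "The trophy doesn't fit into the brown suitcase because it's too large. "
        "What is too large?",
        "The trophy", "The suitcase"))  # prints: The trophy

The lookup only covers this one template, which is exactly the problem: it handles the known questions without any general common sense.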

If you keep writing specific solutions to this class of questions, it eventually becomes hard to come up with a simple question that's still unknown to the program.

Further, you can't get around that by making ever more complex questions, because humans will eventually start messing them up.





