Alan Turing’s “Can Computers Think?” Radio Broadcasts Re-Recorded (aperiodical.com)
136 points by sohkamyung on Jan 4, 2018 | 33 comments



Looking forward to listening to this recording; I'm always telling people interested in the philosophy of AI to read Turing's papers, as, like the author says, they are surprisingly relevant.

Have to comment on this though:

" Turing then makes one firm prediction, that by the end of the 20th century computers would be able to answer questions in a manner indistinguishable from a human being – this is the famous Turing test. Turing’s prediction may have been a couple of decades early, but with the rise of digital assistants I would have to say he was completely right. "

The bit I've italicized is nonsense in an article by someone studying Turing's work. The Turing test is clearly designed as a pragmatic definition of a sufficient test for AI, and assumes a sophisticated human judge. The idea that just because we are building digital assistants we are somehow close to passing the Turing test and making genuinely thinking machines doesn't hold up at all.

I don't think that argument would have impressed Turing, and I'm sure he'd rather we say his prediction was wrong than introduce confusion here. There is already so much confusion on this issue, which Turing tried so hard to dispel, that it's a pity to see more added.


Indeed, the virtue of the Turing test is that it's a very general framework – its freedom allows us to test the AI using any method of communication at hand. One such method is Winograd schemas [1] – a test of common-sense reasoning at which current conversational AIs spectacularly fail.

[1] https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS....


Btw, here is a sketch of such an embedding:

[Session starts]

<Challenger> Hi.

<Judge> Hi. Now, listen to me. My job here is to judge whether you are a human or an AI. I also know that your job is to try and persuade me to believe you are a human, no matter which one you really are. I happen to have a few questions that are easy for humans and hard for AIs. So if you are a human, answering them shouldn't be a problem for you, but if you avoid answering, I'm going to judge that you are an AI, no excuses. Here is the first question: ...

[A pattern of Winograd schema challenges ensues.]

So, as we can see, the Turing test doesn't have to be random chitchat, nor does it have to pretend to be a "normal" human-to-human discussion, since the judge can actively participate, steer the conversation, and zero in on the areas most likely to expose the AI.
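
Or in code form – a minimal sketch of that kind of steered session (the ask() callable and the tiny question bank are hypothetical stand-ins, not anything from the article):

    # Minimal sketch of a judge that steers the session toward Winograd-style
    # questions. `ask` is a hypothetical stand-in for the channel to the
    # challenger; SCHEMAS is a tiny illustrative question bank.
    SCHEMAS = [
        ("The trophy doesn't fit into the brown suitcase because it's too large. "
         "What is too large?", "trophy"),
        ("The trophy doesn't fit into the brown suitcase because it's too small. "
         "What is too small?", "suitcase"),
    ]

    def judge_session(ask, threshold=0.9):
        """Label the challenger 'human' only if enough schemas are answered correctly."""
        correct = 0
        for question, answer in SCHEMAS:
            reply = ask(question).strip().lower()
            if answer in reply:
                correct += 1
        return "human" if correct / len(SCHEMAS) >= threshold else "AI"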

However, if the human participants face no penalty for being labeled as an AI, this obviously doesn't work, as they have no pressure to prove anything. So this kind of embedding wouldn't work under "imitation game"-style rules, but it would work under "prove your humanness"-style rules.


Not to detract from the logic, principle, or arguments here, but if someone said that kind of thing to me I would most likely respond with "Shut up – you talk too much!" I feel a threat would need to be included to give a rational human an incentive to respond to such an arcane and winding request.


If you classify, say, 50% of people as AIs and 50% of AIs as people, then those AIs have passed the Turing test.

So, you are limited to questions that most people will answer correctly. Further, if you find some unusual question that works today, someone can just add it to the program for next time.


>If you classify, say, 50% of people as AIs and 50% of AIs as people, then those AIs have passed the Turing test.

And if your version of the test is "heads it's an AI, tails it's a human", then any AIs that are classified as human will have "passed the Turing test."


I don't think you understood.

The original test specifically had exactly one human and one AI. So, if the judge is forced to resort to a coin flip, that really is success. If the judge does a coin flip because they are lazy, then that's not a Turing test.


Did you check the link about the Winograd schema challenge? The questions test common-sense reasoning, and are very easy for humans to answer. An example:

The trophy doesn't fit into the brown suitcase because it's too large. What is too large? A: The trophy B: The suitcase


Coding a specific solution to questions in exactly that format is not inherently difficult.
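
For instance, a toy special case for exactly the trophy/suitcase phrasing (the function name is just illustrative):

    import re

    # Toy hard-coded "solution" for the trophy/suitcase schema format.
    # It keys entirely on the adjective, so any rewording breaks it.
    def answer_trophy_schema(question):
        match = re.search(r"because it's too (large|small)", question)
        if match is None:
            return None  # not the exact phrasing we hard-coded for
        return "The trophy" if match.group(1) == "large" else "The suitcase"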

If you keep writing specific solutions to this class of questions, it eventually becomes hard to come up with a simple question that's still unknown to the program.

Further, you can't get around that by making ever more complex questions, because humans will eventually start messing them up.


Then you can just drop the "imitate human" part. We don't need that anyway, except for some niche applications.


But the "imitate human" part, as you put it, provides a simple framework & easy to validate testing protocol which is flexible enough to cover the above example and many others.

That's why it's useful.


I feel we could have reached that point by now, for instance if there had been a Turing Race a la the Space Race. The driving factor, though, is that we don't specifically need that from technology. Even when we get to that point, there are tons of reasons we will often want to easily distinguish whether we are communicating with someone vs. something. Being distinguishable doesn't mean an AI is any less effective or easy to communicate with.

And who knows, perhaps this tech is already in the wild - by definition wouldn't it be fooling us?


Computers have passed limited versions of the Turing test, as in one person was mistakenly classified as an AI and an AI classified as a person. Now, I am not saying we could have an expert chat with one for an hour on any topic and still be fooled, but people have already chatted with an AI without noticing it was an AI and not a person.

AGI is a different problem. But, faking AGI to the point someone can't tell is more of a philosophical question than a failure of the Turing test.


> people have already chatted with an AI without noticing it was an AI and not a person

I'm sure this has worked in some limited context where the human was not actively trying to figure out whether it was a human or an AI. I mean, I have sometimes at a distance mistaken a mannequin for a human - that does not mean mannequins are now indistinguishable from humans.

In the Turing test the judge is actively trying to discern whether they are talking to an (adult, literate, mentally sound) human or an AI. AFAIK no AI has ever passed this. (Of course it is easy for a human to impersonate a bad AI by answering erratically, but that doesn't really prove anything.)


I'm not sure that masquerading as a normal human being is any kind of bar for an AI to pass. How about passing for an expert in a field, e.g. medicine, investing, or navigation? I'd much rather see my AI certified that way.


Such a specialized AI would probably be more useful in practice, but that is not really what the Turing test is concerned with. The Turing test is a behaviorist approach to deciding the question of whether a computer "can think".

In any case, I think it is much easier to create an AI specialized in a particular domain than a general "human-like" AI.


There's a big dose of ethnocentrism (species-centrism?) in that. Why do we even want an AI that has human foibles, to the point that it's apparently as fallible as a human? Why is that part of the definition of 'thinking'?


You, and ideally everyone else commenting on this stuff on HN, should just have a very quick read of Turing's original paper before criticizing. It is very well thought-out:

Alan Turing, 1950: "The game may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."

The Turing test is a proposed test of sufficiency for thinking, not a test of necessity; it's really framed pretty well in his original paper.


I get it; the test is sufficient. But I remain troubled, as I'm not sure I even want an AI that thinks in that way (the same as a human). I make no more claim than that; the Turing test is a little interesting and quaint, but surely much better tests could be conceived.


Whether we want a computer which can think like a human is a separate question. The Turing test is only concerned with whether we have such a computer.

Of course you can redefine the word "thinking" however you want, and state that a Game Boy can think according to your definition. That does not really teach us anything, though. The point of the Turing test is that we have no objective definition or measure of whether some entity "can think like a human", so the best we can do is to test whether it appears to think like a human. That is not saying anything about whether it is "good" or "bad" to think like a human. I am certainly happy that computers and machines in general do not think like humans!


It's still a bar that no AI has passed yet. It's not hard to steer the discussion in directions where the difference between a human and an AI becomes evident. This is likely to be important, because the common-sense reasoning ability that current AIs lack is likely to also be useful in more specialized fields.


Yeah, I think digital assistants have not been built to pass a Turing test, i.e. to have full human communication skills. They've just been marketed that way.


In his paper "Computing Machinery and Intelligence" [1], I found Turing's focus on the storage capacity of a machine, as much as its computation speed, to be interesting. In particular, his prediction specifies a concrete expectation of storage capacity, but not of computation speed:

"I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning."

Reading the 10^9 as binary digits, that works out to 125 MB by the year 2000. A good personal computer in the year 2000 would have 512 MB of memory and an 80 GB disk [2], while the top supercomputer [3] had 6 TB of memory and 160 TB of disk storage [4].
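
For reference, the arithmetic behind that 125 MB figure (just a back-of-the-envelope check):

    # Turing's predicted storage capacity, read as 10^9 binary digits.
    bits = 10 ** 9
    megabytes = bits / 8 / 1_000_000         # 125.0 MB
    pc_ram_mb = 512                          # typical year-2000 PC from [2]
    print(megabytes, pc_ram_mb / megabytes)  # 125.0, ~4.1x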

"Of course the digital computer must have an adequate storage capacity as well as working sufficiently fast."

[1] http://phil415.pbworks.com/f/TuringComputing.pdf

[2] http://www.topdesignmag.com/top-performance-computer-looked-...

[3] https://www.top500.org/list/2000/11/

[4] https://en.wikipedia.org/wiki/ASCI_White


Well, some 6 years later computers with 1 GB of memory were common. That's not too much error for a prediction made in 1950.


As much as I admire Turing (I've often wondered what our world would be like if he hadn't died as young as he did), I've never been that convinced of the value of the Turing Test.

Given a choice between

1) software that can carry on a convincing conversation about sports, politics, music, movies, current events, etc. and

2) software that is obviously an AI that can do medical diagnosis and treatment planning better than 90% of human doctors,

I'll take number 2 in a heartbeat.

In the future an AI that can do both will probably be developed, but right now I genuinely believe more effort should be spent on solving problem #2.


My guess is that Turing thought you wouldn't need to make such a choice, and that a robot that could analyze the intricacies of medical diagnosis would surely be able to talk about the weather. It's much like you don't have to choose between your program being able to do medical diagnosis and it having a full-color graphical interface (or even a touch interface on your phone, these days).

In other words, I think that Turing was simply wrong on this point. Though, it's not hard to imagine a future where it becomes cheap to make a program pass a Turing test, changing our expectation of what the user interface for a computer should be like.


When building chatbots, you face this choice: you can either try (poorly) to impersonate a human, or you can be upfront that it's a chatbot and tell people what it is capable of assisting them with. One leads to frustration for the user, the other to potentially useful outcomes.


When I get pulled over I will apply the Turing test to see if the officer is in a good mood, smart, or robocop.


On that note; I'm pretty sure Trump will fail the Turing test.


Some other related historic broadcasts can be accessed here http://www.aiai.ed.ac.uk/~dm/dm.html at the home page of the late Donald Michie (who worked alongside Turing). Sadly, the presentation he gave back in 2007 at Informatics in Edinburgh, which I was very lucky to attend, is missing.


"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." -- Edsger Dijkstra


Turing states a very similar thing in his "Computing Machinery and Intelligence" paper and goes on to pose a more interesting question (the Turing test):

"If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another"


I love this guy. His videos on Numberphile are great.



