I would argue that current LLMs are passing the Turing test, because many observers have a hard time distinguishing them from humans: just look at the difficulty many schools have in enforcing rules like "Not allowed to use LLMs for your homework". The teachers often (though not always) can't tell, looking at a piece of text, whether it was produced by a human or by ChatGPT or some other LLM.

And that "not always" is the crux of the matter, I think. You are arguing that we're not there yet, because there are lines of questioning you can apply that will trip up an LLM and demonstrate that it's not a human. And that's probably a more accurate definition of the test, because Turing predicted that by 2000 or so (he wrote "within 50 years" around 1950) chatbots would be good enough "that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning". He was off by about two decades, but by now that's probably happened. The average interrogator probably wouldn't come up with your (good) strategy of using counterfactuals to trick the LLM, and I would argue two points: 1) that the average interrogator would indeed fail the Turing test (I've long argued that the Turing test isn't one that machines can pass, it's one that humans can fail) because they would likely stick to conventional topics on which the LLM has lots of data, and 2) that the situation where people are actually struggling to distinguish LLMs is one where they don't have an opportunity to interrogate the model: they're looking at one piece of multi-paragraph (usually multi-page) output presented to them, and having to guess whether it was produced by a human (who is therefore not cheating) or by an LLM (in which case the student is cheating because the school has a rule against it). That may not be Turing's actual test, but it's the practical "Turing test" that applies the most today.

I think the TT has to be understood as explicitly adversarial, and as increasingly related to security topics like interactive proofs and side channels. (Looking for guard-rails is just one kind of information leakage, but there's plenty of information available in timing too.)
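
Timing is a concrete example of that leakage. As a rough sketch of what an adversarial interrogator could measure (the ask() callable and the 8 characters-per-second ceiling for human typing are illustrative assumptions, not anything claimed in this thread):

    import time

    def typing_speed(ask, prompt):
        """Apparent characters-per-second for one reply from the counterpart."""
        start = time.monotonic()
        reply = ask(prompt)                     # ask() stands in for whatever chat channel exists
        elapsed = time.monotonic() - start
        return len(reply) / max(elapsed, 1e-6)  # guard against divide-by-zero on instant replies

    def looks_machine_fast(ask, prompts, human_max_cps=8.0):
        """Flag a counterpart that mostly 'types' faster than a person plausibly could."""
        speeds = [typing_speed(ask, p) for p in prompts]
        return sum(s > human_max_cps for s in speeds) > len(speeds) / 2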

If you understand the TT to be about tricking the unwary, in what's supposed to be a trusting and non-adversarial context, and without any open-ended interaction, then it's correct to point out homework cheating as an example. But in that case the TT was solved shortly after the invention of spam: no LLMs needed, plain Markov models are fine.
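
For reference, the kind of Markov-model generator being alluded to fits in a few lines; here is a word-level sketch (the toy corpus is a made-up stand-in, not spam-era data):

    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words observed to follow it."""
        words = text.split()
        chain = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            chain[current].append(nxt)
        return chain

    def generate(chain, start, length=30):
        """Walk the chain, picking a random observed successor at each step."""
        out = [start]
        for _ in range(length - 1):
            successors = chain.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    corpus = "the quick brown fox jumps over the lazy dog and the quick dog barks at the lazy fox"
    print(generate(build_chain(corpus), "the"))

The output is locally plausible and globally incoherent, which is the sense in which "just Markov models" could already trick the unwary.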



