
The article claims (in a screenshot that doesn't cite a source, so take it for what it's worth) that "A recent blog post pointed out that GPT-3-generated text already passes the Turing test if you're skimming and not paying close attention".

This is certainly debatable, and I agree that it is pushing the limit a bit.

I think in the end, the "Turing test" was devised as a thought experiment, not as a final definition of AI. So I guess some freedom of interpretation is reasonable.



Well, as a thought experiment, it suggests a concrete scheme: it's a game where everybody is trying to do their best.

If it's a game where nobody cares, it's a stupid game, and results are meaningless.


Results are not meaningless if, in the real world, everybody is skimming anyway.


>if you're skimming and not paying close attention

Also, if I'm drunk and reading nonsense I might not realize it.


It bends the scenario if the judge is skimming and not paying close attention. That just "pushes the limit".

It breaks the scenario if the judge never went in with the goal of detecting fakery. That makes it useless as a "Turing test".


I agree that this takes things outside the original scope of the Turing test. Still, I find it interesting that a metric based on casual observation can have value in a society where elections can be swayed by online fakery.



