The article claims (in a screenshot that doesn't cite its sources, so take it for what it's worth) that "A recent blog post pointed out that GPT-3-generated text already passes the Turing test if you're skimming and not paying close attention".
This is certainly debatable, and I agree that it is pushing the limit a bit.
I think in the end, the "Turing test" was devised as a thought experiment, not as a final definition of AI. So I guess some freedom of interpretation is reasonable.
I agree with you that it brings things outside the original scope of the Turing test.
I do find it interesting that a metric based on casual observation can have real value in a society where elections can be swayed by online fakery.