Yeah. And while GPT-3's output is often nonsensical, it's nonsensical in the same way a good fraction of the general public writes on the Internet. In that sense it's close to passing the Turing test - not because it's good, but because real humans on generic platforms[0] really are that bad.
--
[0] - By that I mean platforms that aren't strongly moderated for discussion quality, or that don't focus on niches whose audiences already have higher-than-average discourse quality.
It was always inevitable though.