I think ChatGPT is trained to speak in a certain tone. I've played with other models before, and they don't seem to have this issue; their output feels much less distinctive to me.
This is the thing that kills me - people who think they can reliably detect LLM output are fooling themselves.
At best you can look for logical fallacies and false facts in the output and use that to guess, but realistically people are fucking bad at detecting it, including all these HN posters who keep claiming they can do it reliably...
Yes, but the sort of logical inconsistency the machine has is idiosyncratic. There can be contradictions a few words apart that it doesn't pick up on. For example, if you ask it to sort a list into n mutually exclusive categories, it often repeats categories, puts the same item in more than one category, omits an item entirely, or even lists the same item twice within the same category. You might catch human bullshitters making up matters of fact, but no person is oblivious to errors on a basic task like that.
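To make that failure mode concrete, here's a minimal sketch of the kind of consistency check a human does without thinking. It assumes the model's answer has already been parsed into a dict mapping category names to item lists; the format, function name, and sample data are all made up for illustration, not anything a model guarantees.

```python
# Minimal sketch: check that a model's categorization is a true partition
# of the input list (no duplicates across or within categories, no omissions,
# no invented items). Assumes a dict of category name -> list of items.
from collections import Counter

def check_partition(items, categorized):
    """Return a list of human-readable problems with the categorization."""
    problems = []

    # Note: a repeated category name would already have collapsed when the
    # output was parsed into a dict, so only item-level issues are checked here.
    placed = [item for members in categorized.values() for item in members]
    counts = Counter(placed)

    for item, n in counts.items():
        if n > 1:
            problems.append(f"{item!r} appears {n} times across the categories")
        if item not in items:
            problems.append(f"{item!r} was not in the original list")

    for item in items:
        if item not in counts:
            problems.append(f"{item!r} was omitted entirely")

    return problems

# Hypothetical model output showing exactly the mistakes described above.
items = ["apple", "carrot", "salmon", "rice"]
categorized = {
    "fruit": ["apple"],
    "vegetables": ["carrot", "apple"],   # same item in two categories
    "grains": ["rice", "rice"],          # same item twice in one category
}                                        # "salmon" missing altogether

for problem in check_partition(items, categorized):
    print(problem)
```

A few lines of bookkeeping catch every one of those slips instantly, which is the point: a person doing the task might be slow or biased, but they wouldn't silently drop an item or file it twice.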