>negotiating the validity of competing logical arguments
ChatGPT will soon do that for you better than you do it yourself. That raises the question - what is the core competency of humans? So far I see only one: the ability to discover, to dig further into the unknown. Though I think once we push ChatGPT a bit further, such an ability would be pretty easy to add, and it would do much better than humans, since it could generate, prune, evaluate, and verify hypotheses much faster and at incomparably larger scale.
> ChatGPT would soon do that for you better than you do it yourself.
And this is based on..? LLMs suffer from all the logical fallacies humans do, afaik. They perform extremely poorly where there's no training data, since it's mostly pattern matching. Sure, some logical reasoning is encoded in the patterns, but clearly not at a sophisticated level. They're also easily tricked by reusing well-known patterns with variations - e.g. posing a riddle like the wolf-lamb-shepherd one with a different premise makes them fall into a training-data honeypot.
And as for the main argument: to replace critical thinking, of all things, with a pattern parrot is the most techno-naive take I've heard in a long time. Hot damn.
You're listing today's deficiencies. ChatGPT didn't exist several years ago, and will be history in several more.
>to replace literally critical thinking of all things
Nobody is forcing you to replace anything. ChatGPT would just be doing it better and faster, as it's a pretty simple thing once the toolset gets built up - which is happening as we speak.
>it’s mostly pattern matching
That's only one component of ChatGPT. The other is the emergent model (arising from seemingly simple brute-force training) over which that pattern matching is performed.
I already do it when it comes to translation from languages I don't know, i.e. almost all human languages. Soon it will be doing other types of thinking better than me too.
For anything weight-bearing, I really wouldn't trust it for translation. More than once I've gotten an output with subtly different meanings or implications than the input. A human translator is going to be significantly more interrogatable here - just as with a logical reasoner.