It absolutely is anthropomorphizing to speak of "GPT-3's ability to understand concepts" rather than simply saying it can "reproduce, mix and match text from an enormous corpus". And we can quite legitimately appeal to a jury of human observers for their opinions on how well (or badly) the generated output relates to the inputs.
For the specific example the OP cited, "War: like being eaten by a dragon and then having it spit you out":
unless its answer to "Where were you between being eaten by the dragon and before it spat you out?" is "in the dragon's digestive system", that isn't understanding.
And I'm curious to see it answer "Dragons only exist in mythology; does your analogy mean war doesn't exist either? Why not compare to an animal that exists?"
Anyone who has used GPT-4 for any amount of time knows it is definitely not just "reproduce, mix and match text from an enormous corpus". Also, what's the point of the question? GPT has been specifically trained not to answer things like this. But in any case, here is the answer from the instruct model, which is generally more open-ended:
> Sure, I can compare it to a real animal such as bears. Just like dragons are depicted as fearsome and destructive creatures in mythology, bears are also portrayed as powerful and dangerous animals in many cultures. However, just as the existence of dragons is a matter of fiction, the concept of a "war on bears" would also be nonsensical.
> In reality, the issue of war is complex and multifaceted, involving political, economic, and societal factors. It cannot be reduced to a simplistic battle between humans and a particular animal species.
> Furthermore, the use of animals in analogies can be problematic as it may trivialize the gravity and complexity of real-world issues. Wars and conflicts have serious consequences for human lives, whereas animals do not possess the same agency or ability to wage war.
> In conclusion, while dragons may serve as a metaphor for war in mythology, comparing war to a fictional creature diminishes the seriousness of the topic and fails to capture the complexities of real conflicts.
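(For anyone who wants to reproduce this: below is a minimal sketch of how such a prompt could be sent to an instruct-style completion model. The model name gpt-3.5-turbo-instruct, the client library, and the exact prompt wording are my assumptions; the commenter doesn't say what they actually ran.)

    # Sketch only: the commenter doesn't say which model or client they used.
    # Assumes OpenAI's Python client (openai>=1.0) and the
    # gpt-3.5-turbo-instruct completion model.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Dragons only exist in mythology; does your analogy mean war "
        "doesn't exist either? Why not compare to an animal that exists?"
    )

    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    print(resp.choices[0].text.strip())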
> Also, what's the point of the question? GPT has been specifically trained not to answer things like this. But in any case, here is the answer from the instruct model, which is generally more open-ended:
It would demonstrate basic reasoning skills that can't be explained away as "reproduce, mix and match text from an enormous corpus". The response you provided, by contrast, is meaningless word salad; it's a prima facie takedown of your own post.
This is like people who hate poetry insisting their bad poetry is good poetry. Why? Because who else is to say otherwise? Well, the good poets. People who appreciate poetry will know the difference. Everyone else won't care, save for those invested in selling their bad poetry as good.
What does poetry have to do with reasoning? You should think of GPT as a terse person who refuses this kind of exercise. There are certainly people like that who have good reasoning skills but can't answer your question in a poetic way (I'm one of them).