We can do introspection, as the other commenter said, but the more basic difference is that we can follow an iterated strategy, which GPT-3 is incapable of: it must predict the next token in a single, strictly bounded operation. That's why it makes sense to think of it as a "babbler"; it is incapable of not saying the first thing that comes to mind.
However, when you give GPT-3 the opportunity to iterate on a strategy, by asking it to follow each step sequentially and feeding its output back into the prompt, its behavior becomes much more similar to basic human thought.
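To make that concrete, here is a minimal sketch of such a loop, assuming the legacy `openai` Python client (pre-1.0, contemporary with GPT-3) and an `OPENAI_API_KEY` set in the environment; the problem text and strategy steps are invented for illustration:

```python
import openai  # legacy client; reads OPENAI_API_KEY from the environment

# A hypothetical multi-step strategy. Each step's output is appended to the
# prompt, so the model conditions on its own earlier reasoning instead of
# producing the whole answer in one bounded forward pass.
steps = [
    "Step 1: Restate the problem in your own words.",
    "Step 2: List the relevant facts.",
    "Step 3: Combine those facts into a final answer.",
]

prompt = ("Problem: If Alice is taller than Bob, and Bob is taller "
          "than Carol, who is shortest?\n")

for step in steps:
    prompt += step + "\n"
    completion = openai.Completion.create(
        engine="davinci",   # GPT-3 base model
        prompt=prompt,
        max_tokens=64,
        temperature=0.0,
        stop=["Step"],      # stop before the model writes the next step itself
    )
    # Feed this step's output back in so later steps can build on it.
    prompt += completion.choices[0].text.strip() + "\n"

print(prompt)
```

Each iteration is still just next-token prediction, but the outer loop lets earlier "thoughts" become input to later ones, which is the part the single-pass babbler can't do on its own.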