We can do introspection, as the other commenter said, but the more basic difference is that we can follow an iterated strategy, which GPT-3 cannot do on its own: it must predict the next token in a single, strictly bounded operation. That's why it makes sense to think of it as a "babbler"; it is incapable of not saying the first thing that comes to mind.

However, when you give GPT-3 the opportunity to iterate on a strategy, by prompting it through each step sequentially and feeding its intermediate output back in as context, its behavior becomes much more similar to basic human thought. A rough sketch of that loop is below.
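
For instance, here is a minimal sketch of that kind of step-by-step loop, assuming the legacy (<1.0) openai Python client and its Completion.create endpoint; the engine name, prompt wording, and steps are illustrative, not anything prescribed by the API:

    import openai  # legacy (<1.0) client; assumes OPENAI_API_KEY is set

    def complete(prompt):
        # One bounded forward pass: the model just extends the prompt.
        response = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            max_tokens=128,
            temperature=0.7,
            stop=["\nStep:"],  # keep it from running ahead to the next step
        )
        return response["choices"][0]["text"]

    def run_strategy(task, steps):
        # Feed each step in sequentially, appending the model's own
        # intermediate output so later steps can build on earlier ones,
        # instead of forcing one "babbled" answer in a single shot.
        transcript = "Task: " + task + "\n"
        for step in steps:
            transcript += "\nStep: " + step + "\nAnswer:"
            transcript += complete(transcript)
        return transcript

    print(run_strategy(
        "How many days are there between March 3 and July 19?",
        [
            "Count the days remaining in March after the 3rd.",
            "Add the full months of April, May, and June.",
            "Add the days in July up to the 19th, then total them.",
        ],
    ))

Note that each call is still a single bounded completion; the iteration lives entirely in the prompt, which is the point.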
