A good point - I'm taking it as given that reasoning of any depth is an iterative process, with one thought advancing as meta-cognitively guided feedback into the next until a conclusion is reached. A single prompt->completion cycle from a language model wouldn't necessarily meet that definition, but I bet it could be a component in a system that tries to.

I aspire one day to find the free weekends and adequate hubris to build a benchtop implementation of Julian Jaynes's Bicameral Mind, with 1+N GPT-3 or GPT-Neo instances prompting each other iteratively, to see where the train of semantics wanders. (As I'm sure others have already.)
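The loop described above can be sketched in a few lines. This is a minimal, hypothetical sketch: `bicameral_loop` and the stub `voice_*` callables are invented names, and the stubs stand in for real GPT-3/GPT-Neo completion calls, which you'd swap in via whatever client you use.

```python
import itertools

def bicameral_loop(agents, seed_prompt, rounds):
    """Pass a prompt around a ring of completion callables, feeding
    each agent the previous agent's output, and return the transcript."""
    transcript = [seed_prompt]
    for agent in itertools.islice(itertools.cycle(agents), rounds):
        transcript.append(agent(transcript[-1]))
    return transcript

# Stub "voices" standing in for separate model instances; a real
# version would call a hosted or local LM here instead.
voice_a = lambda prompt: prompt + " ...and therefore?"
voice_b = lambda prompt: prompt + " ...consider the opposite."

log = bicameral_loop([voice_a, voice_b], "What is thought?", rounds=4)
for turn in log:
    print(turn)
```

The interesting design question is what each agent actually sees: the last completion only (as here), the full transcript, or a summarized state — each choice gives the "train of semantics" a very different character.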
