
How is the impression of playfulness not a good-faith interpretation?

You of course know that the model is not capable of thought or reasoning - only the appearance of them, as needed to match its training corpus: a corpus of entirely human-generated data. As such, how could anything it does be anything but anthropomorphic?

Now, if this model were trained exclusively on a corpus of mathematical proofs stripped of natural language commentary, the expectation that you seem to have would be more appropriate.




> You of course know that the model is not capable of thought or reasoning

Do we know? It's the reverse Chinese room problem. :p


A good point - I'm taking it as given that reasoning of any depth is an iterative process, with each thought serving as meta-cognitively guided feedback to the next until a conclusion is reached. A single prompt->completion cycle from a language model wouldn't necessarily meet that definition, but I bet it could be a component in a system that does.

I aspire one day to find the free weekends and adequate hubris to build a benchtop implementation of Julian Jaynes's Bicameral Mind, with 1+N GPT-3 or GPT-Neo instances prompting each other iteratively to see where the train of semantics wanders (as I'm sure others have already).
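
For the curious, a minimal sketch of that loop in Python, assuming the Hugging Face transformers library and the small EleutherAI GPT-Neo checkpoint - the model choice, two-voice prompt framing, and sampling parameters are all illustrative guesses, not a definitive design:

    # A minimal "bicameral" loop: two GPT-Neo instances alternately
    # extend a shared transcript, each one's completion becoming the
    # other's prompt. Purely illustrative; any causal LM would do.
    from transformers import pipeline

    voice_a = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")
    voice_b = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")

    transcript = "A: What is the nature of a thought?\nB:"
    speakers = [voice_b, voice_a]  # B answers first, then they alternate

    for turn in range(6):
        model = speakers[turn % 2]
        # Each voice completes the transcript so far.
        out = model(transcript, max_new_tokens=40, do_sample=True,
                    temperature=0.9, return_full_text=False)[0]["generated_text"]
        # Keep only the first line so the exchange stays in dialogue form.
        reply = out.strip().split("\n")[0]
        next_label = "A:" if turn % 2 == 0 else "B:"
        transcript += f" {reply}\n{next_label}"

    print(transcript)  # ...then watch where the train of semantics wanders

The interesting knobs to turn would be giving the two voices asymmetric system prompts (an "admonishing" voice and an "obeying" one, in Jaynes's framing) or different checkpoints entirely.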



