Hacker News

How does it come up with comprehensible sentences that make logical sense about everyday things without understanding what those things are in the real world, or even understanding what logic is? I don't have a stack trace for you, but if you want to hang on "the proof's in the pudding" and say the output is evidence of what it's doing, here's ChatGPT's output to your question:

There are two fruits in this list: banana and watermelon.

So what is the difference between GPT-4 and ChatGPT? Is GPT-4 suddenly running a counting algorithm over the appropriate words in the sentence because it understands that's what it needs to do to get the correct answer? Can anyone explain how you would get from whatever ChatGPT is doing to that via the changes in GPT-4? And more to the point, how does ChatGPT get to the answer 2 without counting? Apparently somehow, because if it can't even count to 4, I'm not sure how you could call whatever it's doing 'counting'.
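For contrast, here is what an explicit counting algorithm over the words in a sentence looks like. This is a minimal sketch only; the word list and fruit set are hypothetical stand-ins, since the original prompt isn't shown in the thread:

```python
# Hypothetical reconstruction of the fruit-counting task as an
# explicit, deterministic algorithm (as opposed to whatever an
# LLM does internally). The fruit set and item list are made up.
FRUITS = {"banana", "watermelon", "apple", "orange", "grape"}

def count_fruits(words):
    """Count how many words name a fruit, by enumerating and checking each one."""
    return sum(1 for w in words if w.lower() in FRUITS)

items = ["banana", "chair", "watermelon", "apple", "lamp", "orange"]
print(count_fruits(items))  # prints 4
```

The point of the contrast: this procedure provably counts for any input, seen or unseen, which is exactly the property in dispute for the model.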



> How does it come up with comprehensible sentences that make logical sense about everyday things without understanding what these things are in the real world or even understanding what logic is?

This is begging the question.

> Can anyone explain how you would get from whatever ChatGPT is doing to that via the changes in GPT-4?

I don't know. I can't explain how a 2-year-old can't count but an 8-year-old can, either. GPT-3.5 generally can't count[0]; GPT-4 generally can.

You've really sidestepped the question, and I suggest you seriously consider it: what's the difference between something that looks like it's counting and something that is actually counting, when it comes to data it's never seen before?

Thanks for discussing this with me.

[0] Add "Solve this like George Polya" to any problem and it will do a better job. When I do that, it's able to get to 4.



