
It's worth chasing that with gwern's critique of Marcus' critique: https://www.gwern.net/GPT-3#marcus-2020

(the critique is: GPT-3 can in fact do all the things Marcus said it couldn't)



I can't play with GPT-3, but when I play with GPT-2 I can easily trick it with counting games. It does well with 0,1,2,3,... but things like 0,1,3,6,10,... get poor responses. Is GPT-3 good at that?
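
For reference, this is roughly how such a probe can be run locally. The snippet below assumes the Hugging Face transformers package and the stock "gpt2" checkpoint, and is only a sketch:

    # Probe GPT-2 with a numeric-series prompt (triangular numbers: 0, 1, 3, 6, 10, 15, ...).
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "0, 1, 3, 6, 10,"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Greedy decoding keeps the continuation deterministic, so a failure is easy to reproduce.
    outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(outputs[0]))  # next term should be 15 if the pattern was picked up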


Yes. I have tried it with GPT-3:

Q: what comes next in the series: 0,3,6, A: 9

Q: what comes next in the series: 0,3,6,9, A: 12
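
(For anyone wanting to reproduce this: a rough sketch of the same probe as a few-shot prompt against the old OpenAI completions API. The openai package, the "davinci" engine, and the exact prompt wording are assumptions, not necessarily what the parent used.)

    # Few-shot series probe against GPT-3 via the (pre-1.0) openai completions API.
    import openai  # assumes OPENAI_API_KEY is set in the environment

    prompt = (
        "Q: what comes next in the series: 0,3,6, A: 9\n"
        "Q: what comes next in the series: 0,3,6,9, A: 12\n"
        "Q: what comes next in the series: 0,1,3,6,10, A:"
    )

    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 engine
        prompt=prompt,
        max_tokens=3,
        temperature=0,      # deterministic, so the answer is repeatable
        stop="\n",
    )
    print(response.choices[0].text.strip())  # 15 would be the correct continuation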


I think the question of "how much can it reason" can be made more specific as "how far away can it reason from the learned examples".

Increases in reasoning power should allow for much smaller usable models.


Yes - reproducing fragments from various texts can look impressive, and could be useful in some applications - like creating comments on HN! (I give it a week before someone says "GPT3 has commented on HN and earned 500 Karma!!!"). But I don't think it can be a reliable problem solver or co-creator.

The fun bit is generalization. Create a pattern that hasn't been read before. Hard with GPT-3 because it's been given everything to read...


I think that this is still reproduction. Try things like 1,A,3,C,5,E,7 or a,1,aa,2,aaa,3,aaaa
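
If someone wants to try exactly these, here is a small helper to generate the probe strings so they can be pasted into either snippet above (the function names are just made up for illustration):

    # Build the novel-pattern probes suggested above.
    def interleaved_probe(n_terms=8):
        """1,A,3,C,5,E,7,... -- odd numbers interleaved with every other capital letter."""
        terms = [str(i + 1) if i % 2 == 0 else chr(ord("A") + i - 1) for i in range(n_terms)]
        return ",".join(terms)

    def growing_probe(n_pairs=4):
        """a,1,aa,2,aaa,3,... -- runs of 'a' of increasing length, each followed by its count."""
        parts = []
        for k in range(1, n_pairs + 1):
            parts.extend(["a" * k, str(k)])
        return ",".join(parts)

    print(interleaved_probe())  # 1,A,3,C,5,E,7,G
    print(growing_probe())      # a,1,aa,2,aaa,3,aaaa,4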



