I can't play with GPT-3, but when I play with GPT-2 I can easily trick it with counting games. It does well with 0, 1, 2, 3, ..., but sequences like 0, 1, 3, 6, 10, ... (the triangular numbers) get poor responses. Is GPT-3 any better at that?
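If anyone wants to run the same counting-game probe themselves, here's a minimal sketch. It assumes the Hugging Face `transformers` library and the public `gpt2` checkpoint (my choices, not something from the thread), and just compares a linear count against the triangular numbers:

```python
# Minimal sketch of the counting-game probe described above,
# using the Hugging Face `transformers` pipeline and the `gpt2` checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# An easy linear sequence vs. the triangular numbers the model tends to fumble.
prompts = ["0, 1, 2, 3, 4, 5,", "0, 1, 3, 6, 10, 15,"]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=20, do_sample=False)
    # The pipeline returns the prompt plus continuation; print only the continuation.
    print(prompt, "->", out[0]["generated_text"][len(prompt):])
```

Greedy decoding (`do_sample=False`) makes the comparison repeatable; with sampling on, GPT-2's failures on the second sequence are even noisier.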
Yes - reproducing fragments from various texts can look impressive, and could be useful in some applications - like creating comments on HN! (I give it a week before someone says "GPT-3 has commented on HN and earned 500 karma!!!"). But I don't think it can be a reliable problem solver or co-creator.
The fun bit is generalization. Create a pattern that hasn't been read before. Hard with GPT-3 because it's been given everything to read...
(the critique is: GPT-3 can in fact do all the things Marcus said it couldn't)