I've seen similar claims about GPT-3.5 and Copilot, so I won't hold my breath.
To quote the GPT-4 paper:
"GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its pre-training data cuts off in September 2021, and does not learn from its experience. It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obviously false statements from a user. It can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.
GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake".
> I also asked it to write using non existent but plausible sounding APIs, and it flat out says "As of my knowledge cutoff
Ask it to write a deep integration with Samsung TV or Google Cast. My bet is that it will imagine non-existent APIs, since those APIs are partly obscure and partly closed under NDAs.