Hacker News

It is so far ahead of even what the best IDEs do. For one, I have not seen GPT-4 ever use non-existent APIs. You don't need to carefully construct prompts. It tolerates typos to a good extent. You can just type a rough description and the output won't need manual cleaning. You might need to iterate to focus on something specific (like removing all heap allocations to improve performance).


I've seen it use non-existent APIs a lot. Working on a project that uses a dialect of Python it told me it knew (Starlark) was like pulling teeth. It would tell me to use a Python feature Starlark doesn't have; I'd ask it to rewrite the code without that specific feature, and it would use another feature Starlark doesn't have; I'd then ask it to write the solution using neither, and it would just give me the first solution again.
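To illustrate the kind of gap involved: Starlark deliberately drops several Python features, including the `while` statement and recursion, so a perfectly idiomatic Python suggestion can be a syntax error there. A minimal sketch (the function names are made up for illustration; both versions run as plain Python, but only the second shape is valid Starlark):

```python
# A typical Python suggestion -- but Starlark has no `while` statement,
# so this wouldn't even parse there:
def count_down_while(n):
    result = []
    while n > 0:  # `while` is not allowed in Starlark
        result.append(n)
        n -= 1
    return result

# The Starlark-compatible rewrite: iteration must be bounded,
# so you loop over range() instead:
def count_down_for(n):
    result = []
    for i in range(n):
        result.append(n - i)
    return result

print(count_down_for(3))  # [3, 2, 1]
```

The point of the bounded-`for` restriction is that Starlark guarantees termination of build files; a model that only pattern-matches on Python tends to keep reaching for `while` or recursion anyway.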


> For one, I have not seen GPT4 ever use non existent APIs.

Have you asked it to use any API that appeared after September 2021 (the cutoff date for its training data)?

Have you asked it to write code in less popular languages (e.g. Elixir)?

Have you asked it to write code for less popular or unavailable APIs (smart TV integrations)?


Yeah it was basically useless for an Elixir project I was working on. That will probably change at some point I’m sure.


I have used it to write Nim and Zig code (both not too popular languages).

I also asked it to write code using non-existent but plausible-sounding APIs, and it flat out says "As of my knowledge cutoff in September 2021, I have no knowledge ...."

Are you talking about GPT-4 or the default ChatGPT?


I've seen similar claims about GPT 3.5 and Copilot, so I won't hold my breath.

To quote the GPT-4 paper:

"GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its pre-training data cuts off in September 2021, and does not learn from its experience. It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obviously false statements from a user. It can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.

GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake".

> I also asked it to write using non existent but plausible sounding APIs, and it flat out says "As of my knowledge cutoff

Ask it to write a deep integration with Samsung TV or Google Cast. My bet is that it will imagine non-existent APIs (those APIs are partly obscure and partly closed under NDAs).


How do you know GPT-4's cutoff date...? I mean, it says that, but it could well have "learned" its (supposed) cutoff date from the GPT-3.5 output all over the internet, right?


> How do you know GPT4's cut off date...?

"GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its pre-training data cuts off in September 2021, and does not learn from its experience."

GPT-4 paper, page 10: https://arxiv.org/pdf/2303.08774.pdf


The model repeats it all the time "As of my knowledge cutoff date"


Yes, and that fact doesn't tell me anything, as I know an LLM is entirely capable of saying things that aren't true.


That claim doesn't come from ChatGPT, it comes from OpenAI themselves.




