Hacker News

This is welcome, because GPT-4 now requires several iterations of prompts to do its job. It used to take no more than one prompt and one clarification to get a good output. Now it's just a GPT-3.5-turbo that hallucinates slightly less.


Do you use the web app?

If so, could you please go back in your history and start a new chat with an old prompt that previously got an excellent response?

I am curious whether you would see the degradation and could share the example.


To your point: the fact that we haven't seen widespread examples of prompts that used to work but now don't seems telling about the accuracy of these claims.



