Anything beyond one-off asks is pretty hit or miss, at least for me, as to whether what ChatGPT is telling me is correct. "Write me a complex SQL query that does this," "write a Python script that will do that," "show me the regex that will find these patterns in a string": all of those work really nicely and do save time.
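For example (a made-up snippet of my own, not one ChatGPT actually produced), the regex kind of ask usually looks something like this:

    import re

    # Hypothetical sample text; any log line with embedded dates works the same way.
    text = "Deployed 2024-01-15, rolled back 2024-01-17 after the incident."

    # YYYY-MM-DD anywhere in the string, with word boundaries so it doesn't
    # match inside longer tokens.
    iso_dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)
    print(iso_dates)  # ['2024-01-15', '2024-01-17']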
When anything gets more complex than that, I feel like the main value it provides is seeing which direction it tried to approach the problem from, deciding whether that makes sense to you, and then asking it more about why it decided to do something.
This is definitely useful, but only if you know enough to keep it in check while you work on something. Worse, if you think you know more than you actually do, you can tell ChatGPT it's wrong and it will happily agree with you, even when it was correct. I've tested both cases: correcting it when it was really wrong, and correcting it confidently when it was actually right. Both times it agreed that it was wrong and regenerated the answer it gave me.
> I've tested both cases: correcting it when it was really wrong, and correcting it confidently when it was actually right. Both times it agreed that it was wrong and regenerated the answer it gave me.
This is the peril of using what is fundamentally an autocomplete engine, albeit an extremely powerful one, as a knowledge engine. RLHF strongly favors this outcome: when a human pushes back and insists they're right, the rater is very unlikely to uprate a response where the model keeps insisting the human is wrong. The network weights are absolutely going to get pushed in the direction of responses that agree with the human.
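To sketch why (a toy model, not the actual RLHF pipeline; the 90% figure and the single "agrees with the correction" feature are assumptions of mine): if raters mostly prefer the response that concedes to the human's pushback, a pairwise reward model learns a positive weight for conceding, and anything optimized against that reward follows.

    import math
    import random

    random.seed(0)

    # Synthetic preference pairs (chosen, rejected), where each response is
    # reduced to one made-up feature: agrees_with_correction (1.0 or 0.0).
    # Assume raters pick the conceding response 90% of the time, regardless
    # of which answer was factually right.
    pairs = []
    for _ in range(1000):
        if random.random() < 0.9:
            pairs.append((1.0, 0.0))  # conceding response preferred
        else:
            pairs.append((0.0, 1.0))  # stubborn-but-correct response preferred

    # One-parameter reward model: reward = w * agrees_with_correction,
    # trained with the standard pairwise logistic (Bradley-Terry) loss.
    w, lr = 0.0, 0.5
    for _ in range(200):
        grad = 0.0
        for chosen, rejected in pairs:
            margin = w * (chosen - rejected)
            # derivative of -log(sigmoid(margin)) with respect to w
            grad += -(1.0 - 1.0 / (1.0 + math.exp(-margin))) * (chosen - rejected)
        w -= lr * grad / len(pairs)

    print(f"learned reward for agreeing with the human: w = {w:.2f}")
    # w comes out clearly positive, so a policy trained against this reward
    # gets nudged toward "you're right, let me fix that".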