Over the weekend, I was trying to add a feature to a largely unfamiliar codebase. ChatGPT appeared to have knowledge of this codebase, so I asked it many questions as I worked, in order to understand the big picture. In hindsight, I'm pretty sure almost every answer ChatGPT gave me was wrong.
However, even with this outcome—I've had cases where ChatGPT was significantly more accurate—I think ChatGPT was helpful on net. The ability to propose ideas and talk through problems with a robot was valuable when I got stuck, in a Socratic-dialogue sort of way. Perhaps I could have gotten the same result from some sort of question-formulation exercise—all of the useful insights ultimately came from me—but I think ChatGPT's responses helped me follow my own reasoning, if that makes sense.