I've got access to GPT-4 at work. Many times, when I hit a bug while programming, I'll paste my code into GPT, tell it what the code is currently doing, and tell it what I actually want it to do. It then explains what I'm doing wrong and what I can do to fix it. I've had success with this over 90% of the time. This saves me a significant amount of time that would otherwise be spent hunting down solutions on Google, Stack Overflow, GitHub issues, etc.
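To give a concrete picture of the workflow, here's roughly what the exchange looks like if you wire it up through the API instead of the chat UI. This is a toy sketch, not my actual tooling: it assumes the official `openai` npm package and an `OPENAI_API_KEY` in the environment, and the buggy function, its description, and the prompt wording are all made-up placeholders.

```typescript
// Toy sketch of the "paste the code, say what it does, say what I want" prompt.
// Assumes the official `openai` npm package (v4+) and OPENAI_API_KEY in the env;
// the buggy snippet and descriptions are placeholders, not real code from work.
import OpenAI from "openai";

const client = new OpenAI(); // picks up OPENAI_API_KEY from the environment

const buggyCode = `
function sumEvens(nums: number[]): number {
  let total = 0;
  for (let i = 1; i < nums.length; i++) { // placeholder bug: skips nums[0]
    if (nums[i] % 2 === 0) total += nums[i];
  }
  return total;
}`;

async function askForFix(): Promise<void> {
  const completion = await client.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "user",
        content: [
          "Here is my code:",
          buggyCode,
          "What it currently does: sumEvens([2, 3, 4]) returns 4, as if the first element didn't exist.",
          "What I want it to do: return 6, i.e. the sum of all even numbers in the array.",
          "Explain what I'm doing wrong and how to fix it.",
        ].join("\n\n"),
      },
    ],
  });
  console.log(completion.choices[0].message.content);
}

askForFix().catch(console.error);
```

In practice I mostly just paste the same three pieces (code, actual behavior, desired behavior) into the chat window; the structure of the prompt matters more than the tooling around it.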
I don't know what else to say other than that I would not willingly go back to life without GPT. The value speaks for itself to me.
How simple are your bugs? The bugs I usually have to fix at work involve edge cases around complex user interactions. They're less programming bugs and more "well, we didn't really think about this particular user interaction when developing this feature" situations: usually simple one-line fixes, but they can take hours of back and forth with product to determine whether it's even a bug, plus tracing where exactly the data/interaction in the code comes from.
Until I can paste my entire codebase and the entire history of the product development process into GPT, I don't see how it can help.
The bugs that it easily fixes are generally the bugs whose errors I can copy/paste into Google and find an immediate answer for on Stack Overflow.
How is this possible? What do you guys do at work? I haven't had success with either GPT-4 (did you build your own API-calling tool for it, or do you just paste code into their Playground?) or GitHub Copilot in delivering anywhere close to 90% of the time. It usually misses a whole lot of context.
It feels like it would work for perfectly encapsulated, small, single-purpose functions, which of course sounds great, but in reality not many projects are structured that way.
I've used it to try to generate some rather small components in React / TypeScript myself. What it did with arrays of refs (hook calls inside a useState hook's initializer function), and the fact that I couldn't get it to fix its own mistakes by doing what people suggest ("just copy-paste the error") or by trying to reason with it, made me not trust it very much. The output code is also pretty low quality, in my experience and opinion.
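For concreteness, this is the kind of thing I mean. It's a simplified reconstruction, not the actual component it generated; the names and props are made up. Calling hooks inside a useState initializer breaks the Rules of Hooks, and one conventional alternative is a single ref holding an array:

```tsx
// Simplified reconstruction of the issue, not the actual generated code.
import { useRef, useState } from "react";

// Roughly what the generated code did: call useRef inside a useState
// initializer to build an array of refs. That violates the Rules of Hooks
// (hooks must be called unconditionally at the top level of the component),
// so React's lint rule flags it and the behavior is unreliable:
//
//   const [itemRefs] = useState(() =>
//     items.map(() => useRef<HTMLLIElement>(null)) // invalid hook call
//   );

// One conventional alternative: a single useRef holding a plain array,
// populated with callback refs during render.
export function ItemList({ items }: { items: string[] }) {
  const itemRefs = useRef<(HTMLLIElement | null)[]>([]);
  const [selected, setSelected] = useState(0);

  return (
    <ul>
      {items.map((item, i) => (
        <li
          key={item}
          ref={(el) => {
            itemRefs.current[i] = el; // store the DOM node; no extra hooks needed
          }}
          onClick={() => {
            setSelected(i);
            itemRefs.current[i]?.scrollIntoView({ block: "nearest" });
          }}
          style={{ fontWeight: i === selected ? "bold" : "normal" }}
        >
          {item}
        </li>
      ))}
    </ul>
  );
}
```

The underlying issue is that the number of hook calls can't depend on items.length, which is exactly what the generated code made it do, and copy-pasting the lint error back in didn't get it to a working version.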