This isn't an example of a coding problem, but just yesterday we hit a very difficult technical problem that one of our partners, a very large IT company, couldn't solve. None of our engineers knew how to solve it after trying for a few days. I'd say I'm an expert in the industry, having been a developer for over 20 years. I thought I knew the best way to solve it, but I wasn't sure exactly. I asked ChatGPT (GPT-4), and the answer it gave, while not perfect in every way, was pretty much exactly the method we needed to solve the issue.
It's a bit scary because knowing this stuff is sort of our secret sauce, and GPT-4 was able to give an even better answer than I could. It helped us out a lot. We are now taking the solution back to the customer and will be implementing it.
A few additional thoughts:
1. I knew exactly what type of question to ask it to get the right answer (i.e. if someone used a different prompt maybe they would get a different answer)
2. I knew immediately that the answer it gave was what we needed to implement. Some parts of the answer were unhelpful or misleading, and I was able to disregard those parts. Someone else might have needed more time to figure that out.
I imagine future versions of GPT will be better at both points.
this is exactly the kind of post that folks who say "oh - it won't take our jobs - our jobs are safe (way too advanced for silly AI)" need to be reading. You're an expert, it answered an expert question, and your colleagues couldn't do it either...after a few days...and this is just early days.
I don't think this is the takeaway. I'm in a similar situation to the GP, but the crux is that we still need people to decide *what* needs to happen - the computer 'just' helps with the 'how'. You need to be a domain expert to ask the right questions and pick out the useful parts of the results.
I'm also not so worried about 'oh but the machines will keep getting better'. I mean, they will, but the above will still remain true, at least until we get to the point where the machines start to make and drive decisions on their own. Which we'll get to, but by that point, I/we will have bigger problems than computers being better at programming than I am.
I look at it differently. If what we've been writing can be replaced by a machine, that leaves us with coming up with more creative solutions. We can now spend our time more usefully, I think!