I don't feel like we are in the waning days of the craft at all. Most of the craft is creating an understanding between people and software and most human programmers are still bad at it. AI might replace some programmers but none who program as a craft.
"Chess engines might get better than some chess players, but none who play Chess as a craft." Do you think people in the 90s thought this? Probably...
In the article, the author mentions that Chess centaurs (a human player consulting an engine) can still beat an engine alone. But the author is wrong. There was a brief period when that was true, but chess engines are now so strong that any human intervention just holds them back.
I've been programming 30+ years, and am an accomplished programmer who loves the craft, but the writing is on the wall. ChatGPT is better than me at programming in most every way. It knows more languages, more tricks, more libraries, more error codes, is faster, cheaper, etc.
The only area where I still feel superior to ChatGPT is that I have a better understanding of the "big picture" of what the program is trying to accomplish and can help steer it to work on the right sub-problems. Funnily enough, it was the same with centaur Chess; humans would make strategic decisions while the engines would work out the tactics. But that model is now useless.
We are currently enjoying a time where (human programmer+AI > AI programmer). It's an awesome time to live in, but, like with Chess, I doubt it will last very long.
Chess is a closed problem. Whereas software development very much isn't.
You will also have to provide a source for 'chess engines are so strong now that any human intervention just holds them back', a cursory search suggests this is by no means settled.
Yes, the rules of chess are simpler, which is why all this happened many years ago for chess.
https://gwern.net/note/note#advanced-chess-obituary -- here is a reference about centaur/advanced chess. The source isn't perfect, as the tournaments seem to have fizzled out 5-10 years ago when engines got better and it all became irrelevant. Sadly this means we don't have 100 games of GM+engine vs. engine in 2023 to truly settle it, but I've been following this for a while and I have high confidence that Stockfish_2023+human ~= Stockfish_2023.
I think closed vs. open problems are not simply different in magnitude of difficulty but qualitatively different. When I'm programming, most of the interesting things I work on don't have a clear correct answer, or even a way of telling why a particular set of choices doesn't get traction.
I guess it's possible that just being "smarter" might in some cases get a better solution from a series of text prompts, but that seems too vague an argument to hold much water for me.
> It knows more languages, more tricks, more libraries, more error codes, is faster, cheaper, etc.
True up until the point that you want to do something that hasn't really been done before, or is just not as findable on the internet. LLMs only know what is already out there; they will not create new frameworks or think up new paradigms in that regard.
It also is very often wrong in the code it outputs, doesn't know if things got deprecated after the training data cutoff, etc. As a funny recent example, I asked ChatGPT for an example using the OpenAI Node.js library. The example was wrong, as the library has had a major version bump since the last time the training data was updated.
> The only area that I still feel superior to ChatGPT is that I have a better understanding of the "big picture" of what the program is trying to accomplish and can help steer it to work on the right sub-problems.
Which probably is based on your general experience and understanding of programming in the last 30+ years. As I have said elsewhere, I really don't think that LLMs in their current iteration will be replacing developers. They are however going to be part of the toolchain of developers.
> It also is very often wrong in the code it outputs, doesn't know if things got deprecated after the training data threshold, etc
Today I asked it a question and it was wrong... then it ran the code, got the same error as me, and then fixed it (and correctly explained why it was wrong), without me prompting further :)
Really though, how long until that training update goes from every so often to constant? Now that half the internet is feeding it information, it doesn't even need to scour other sources -- it's becoming its own source, for better or worse.
I have been programming 30+ years, and not two days ago I looked at a problem I've been dealing with since before 2019, went "this would be easier if I changed methods," and mitigated the issue in three hours from an airplane.
Programming is only superficially about code. The trick is really figuring out how to approach problems.