The most difficult and time-consuming activity of software engineering is not the production of source code as a textual artifact, but building a mental model of the software system: why it was built a certain way, how it may be extended, how it may not be extended, how to answer questions about it, and an understanding of the abstractions within it and how to use them. Crucially, this understanding exists outside of the source code itself.
It's for this reason that I am skeptical of ChatGPT: it automates the most trivial part of software engineering, writing the code. It will not give you understanding, only the textual artifact (and whether that artifact is actually the product of some AI "understanding" is unclear).
You MUST use RAG (retrieval-augmented generation) for ChatGPT to be generally useful for programming.
I use a ChatGPT-based script with RAG to work with code bases. In the augmented prompt I include the text documentation from the repository, descriptions of application and folder conventions from the documentation folder, and the file paths of the source files. The documentation it creates is nearly as good as my own, and I feed the output back into the documentation folder for an even better understanding of the application.
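To make that concrete, here is a minimal sketch of the approach, assuming the pre-1.0 OpenAI Python client. The folder layout (docs/*.md), the file globs, and the model name are hypothetical stand-ins, not details from my actual script:

```python
import pathlib
import openai  # requires openai<1.0 and OPENAI_API_KEY in the environment

REPO = pathlib.Path(".")

def build_augmented_prompt(question: str) -> str:
    # Pull in the repository's documentation and convention notes
    # (hypothetical layout: markdown files under docs/).
    docs = "\n\n".join(p.read_text() for p in sorted((REPO / "docs").glob("*.md")))
    # List source file paths so the model can reason about project structure.
    paths = "\n".join(str(p) for p in sorted(REPO.rglob("*.py")))
    return (
        "Project documentation and conventions:\n"
        f"{docs}\n\n"
        "Source file paths:\n"
        f"{paths}\n\n"
        f"Question: {question}"
    )

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You answer questions about this code base, "
                    "following its documented conventions."},
        {"role": "user",
         "content": build_augmented_prompt("How do I add a new report type?")},
    ],
)
print(response.choices[0].message.content)
```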
I am working on an effective prompt to enforce the overall implementation style and approach. ChatGPT strays toward system-agnostic, lower-level abstractions instead of following the application's conventions. Conventions at a higher level of abstraction are a subtle but important part of the application's theory.
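For illustration, a sketch of the kind of system prompt I mean; the application name, layers, and paths below are hypothetical examples, not from a real code base:

```
You are contributing to the AcmeApp code base.
Follow the application's conventions, not generic idioms:
- Go through the existing Repository/Service layers for data access;
  do not write raw SQL in request handlers.
- Use the helpers in lib/validation instead of ad-hoc checks.
- Match the naming and folder layout described in docs/conventions.md.
When a request conflicts with these conventions, say so rather than
falling back to lower-level, system-agnostic code.
```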
While the underlying model may not have 'understanding', the iterative process of interacting with it builds up a context that, in my experience, captures part of the 'area of interest'. During a pair-programming session, our interactions create a unique context in which the model responds as if it has modelled an 'understanding'; it is something more than a simple text record of our conversation.
I think this is true for any domain of knowledge in ChatGPT. Its knowledge is shallow in a certain sense, but a lot of things don't require deep knowledge. Shallow knowledge generated very quickly can still be useful.
I find ChatGPT to be very useful as a source of documentation. I can ask it to summarize a certain scientific topic for me, or to explain how to do a certain thing in the Win32 API. I'm not going to take its output at face value, but it still speeds up the process of figuring out what I should research further, what function to call, etc.
That FoC episode was my introduction to the show and was completely fantastic. The discussion was interesting, but they also had a lot of fun with the format which I was not expecting.
Thanks for the link. I really like the bit that expands on the notion that premature optimization is not limited to run-time optimizations but applies, perhaps even more so, to abstraction optimizations.