This is my new capability: I am not a coder or a programmer, and I can get things built, in my own time, at my own speed, solo.
Would it be better code if someone with 3 years of university and 5 years of coding practice did it? Yes, very probably, but the gap seems to be narrowing. Humorously, I don't know enough about good code to tell you whether what I build with LLMs is good code. Sometimes I build a function that feels magical; other times it seems like a fragile mess. But I don't know.
Do I know "javascript" or "python" or the theory of good coding practice? No, not currently. But I am building things. Things that I have personal, very specific requirements for.
Where I don't have to liaise or berate someone else. Where I don't have to pay someone else. Where I don't share the recognition (if there is any ever) of the thing, I and only I have produced- (with chatGPT, Gemini and most recently llama3).
Folks have been feeling superior for 70 years and earning a good living because they spoke the intermediary language of compute engines. What makes them actually special NOW is computer science, the theory. As for the languages, we now have very cheap (and in the case of open-source local models, free) translators. And those translators can teach you some computer science as well, but that still takes time and practice.
I'm the muggle. The blunt. And I'm loving this glowy lantern, this psi-rig.
One of the lessons that one learns as a programmer is to be able to write code that one can later read back and understand. This includes code written by others as well as code written by oneself.
When it comes to production quality code that should capture complex and/or business-critical functionality, you do want an experienced person to have architected the solution and to have written the code and for that code to have been tested and reviewed.
The risk right now is of many IT companies trying to win bids by throwing inexperienced devs at complex problems and committing to lower prices and timelines by procuring a USD 20 per month Github Co-Pilot subscription.
You individually may enjoy being able to put together solutions as a non-programmer. Good for you. I myself recently used ChatGPT to understand how to write a web-app using Rust and I was able to get things working with some trial and error so I understand your feeling of liberation and of accomplishment.
Many of us on this discussion thread work in teams and on projects where the code is written for professional reasons and for business outcomes. The discussion is therefore focused on the reliability of and the readability of AI-assisted coding.
hmm.
I ended up with a few 750+ line chunks of JS, beyond the ability of ChatGPT to parse back at me. So my go-to technique now is to break the code into smaller chunks and make them separate files in a folder structure, rather than keeping everything inside a single *.js file.
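A minimal sketch of what that split looks like (the function names and file paths are hypothetical, just to illustrate the pattern). Shown here as one runnable file for brevity; in practice each section would live in its own file, and the entry point would just import and wire them together:

```javascript
// Hypothetical refactor: instead of one 750+ line app.js, each concern
// becomes a small, single-purpose module.

// would live in lib/format.js — formatting helpers only
function formatPrice(cents) {
  // convert integer cents to a display string like "$19.99"
  return '$' + (cents / 100).toFixed(2);
}

// would live in lib/validate.js — input checks only
function isValidEmail(s) {
  // very rough check: something@something.something, no spaces
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(s);
}

// app.js then becomes a thin entry point, e.g.:
//   const { formatPrice } = require('./lib/format');
//   const { isValidEmail } = require('./lib/validate');
console.log(formatPrice(1999));       // "$19.99"
console.log(isValidEmail('a@b.com')); // true
```

A side benefit of files this small: each one fits comfortably in an LLM's context window, so you can paste a single module and ask "what does this do?" without hitting the parse-back limit.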
So readability is an issue for me- even more so because I rely on ChatGPT to parse it back- sometimes I understand the code, but usually I need the llm to confirm my understanding.
I'm not sure if this scales for teams. My work has sourcegraph, which should assist with codebases. So far it hasn't been particularly useful- I can use it to find specific vulnerable libraries, keys in code etc, but that is just search.
What I really need is things like "show me the complete chain of code for this particular user activity in the app and highlight tokens used in authentication" ... - something senior engineers struggle to pull from our hundreds of services and huge pile of code. And so far Sourcegraph and Lightstep are incapable of doing that job. Maybe with better RAG or infinite context length or some other improvement there will be such a tool. But currently the combined output of thousands of engineers over years is almost un-navigable.
Some of that code might be crisp, some of it is definitely of LLM-like quality (in a bad way)- I know this because I hear people's explanations of said code and how they misremembered its function during post-mortems. Folks copy and paste outdated example code from the wiki etc., i.e. making things they don't understand. I presume that used to happen with Stack Overflow too. Engineers moving to LLMs won't make much difference IMO.
I agree, your points are valid, but I see "prompt engineering" as democratization of the ability to code. Previously this was all out of reach for me, behind a wall of memorization of language and syntax that I touched in the Pascal era and never crossed.
12 hours to build my first node.js app that did something in exactly the way I had wanted for 30 years. (including installing git and vscode on windows- see, now I am truly one to be reviled)
The problem I currently have with AI-generated code, as an experienced programmer who can read and understand the output, isn’t that the code quality is bad, but that it’s often buggy and wrong. I recently recommended in my company that if copilot is allowed to be used, the developers using it must thoroughly understand every line of code it writes before accepting it, because the risk of error is too high.
Copilot may work for simple scripts, but even for tasks where it mostly got things right, in my experience it still introduced subtle bugs and incorrect results more often than not.