>Almost all programming jobs are actually software engineering jobs these days.
Not quite. Most programming jobs are translation jobs, where you take some business requirement and put it into code. Which is why GPT models are going to render most of those jobs obsolete.
That depends on how trivial and stereotypical the tasks fulfilled by that role are. Sure, writing simple functions, accessing databases, or loading and unloading form-based user interfaces should be easy enough to specify. These will be automated first.
But designing a user interface that's intuitive or chooses sensible default values, especially one that isn't "typical" (i.e., where a business model or use case already exists), that isn't trivial to specify, or that's complex to integrate into a workflow -- these use cases will require iteration to specify usably. And revising a specification is an ability where language-based specification tools like GPT have yet to prove themselves, just as with activities such as interactive debugging or the performance tuning of an implementation.
How do you describe a task to ChatGPT that isn't yet well defined and still requires iterative refinement with the user to nail down? Until ChatGPT can perform Q&A to "fill in the blanks" and resolve the requirements of a custom task, a human in the loop will still be needed.
Think of ChatGPT as a way to code using templates bigger than the ones used today. Initially, its templates will be smaller than the average function. It's hard to know how long it will be before its templates grow sufficiently in size and complexity to build a full-blown app, unless that app is pretty trivial and requires no customization. I'm guessing it'll be years before it creates apps.
I think of ChatGPT as a way to dynamically generate Grails scaffolds.
A scaffold is basically a code template that gets you started with writing a particular class so you don't have to start from zero.
ChatGPT is an amazing scaffold generator, which isn't surprising because that is one of the defining features of an LLM, but people extrapolate this and say absurd things that simply trigger my bullshit detector.
But ChatGPT hardly listens to my instructions, and its severe case of Alzheimer's is quite annoying. And even then, who is going to write the prompts? It's not like your project managers/customers actually write clear requirements or have any actual knowledge of how their infrastructure works.
Prompting an AI to write code is also quite a slow back-and-forth process: it took me two hours for a basic class I could have written in 5 minutes. But since the AI can answer with bullshit in five seconds, I am now obsolete, even though it needs multiple iterations, reading the code is the bottleneck, and I practically did all the work. (Not to mention integrating the code into my project myself and doubling the lines of code, because asking it to make the modifications and additions is just way too slow. Typing is just too damn fast to bother. Maybe teach your developers touch typing so they don't suck versus AI?)
Translation where the context is whole apps, spread over dozens or hundreds of files with complex interdependencies, is also a form of integration. And ChatGPT doesn't have the context size to do it; maybe the next GPT will. But 4,000 tokens is not enough -- 50-100K tokens would be more like it.
Well... yes, it is a stretch to use it for a large code base, but especially if the files are all relatively small or medium sized, a directory listing can get you pretty far in selecting the relevant ones. You can also do a vector (embedding) search to find relevant functions/files and feed those into the prompt.
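A minimal sketch of that embedding approach, assuming the 0.x-era openai Python package and numpy; the model choice, file filters, and whole-file truncation are placeholders, not a definitive recipe:

    # Sketch: rank source files by embedding similarity to a query, then paste the
    # top hits into the prompt. Assumes the 0.x openai package (openai.Embedding.create)
    # and OPENAI_API_KEY set in the environment.
    import os
    import numpy as np
    import openai

    def embed(texts):
        # text-embedding-ada-002 is one plausible choice of embedding model
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return [np.array(d["embedding"]) for d in resp["data"]]

    def relevant_files(repo_dir, query, top_k=5):
        paths, bodies = [], []
        for root, _, files in os.walk(repo_dir):
            for name in files:
                if name.endswith((".py", ".ts", ".java")):
                    path = os.path.join(root, name)
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        paths.append(path)
                        # naive: embed (truncated) whole files; real code would chunk them
                        bodies.append(f.read()[:8000])
        file_vecs = embed(bodies)
        (query_vec,) = embed([query])
        cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        ranked = sorted(zip((cos(v, query_vec) for v in file_vecs), paths), reverse=True)
        return [p for _, p in ranked[:top_k]]

    # The contents of the returned paths would then be concatenated into the prompt
    # ahead of the actual instruction, staying under the model's token limit.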
Also, the OpenAI coding model code-davinci-002 has an 8,000-token max, not 4,000 like text-davinci-003.
You can ask ChatGPT right now: "Given this JSON as input, and this JSON as output, write code that transforms the input to the output", and it will get it right. Try it out sometime, and then realize that that's like 80% of backend processing.
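For illustration, the kind of transform being described might look like this; the input/output shapes and field names are made up, not from any real service:

    # Hypothetical example of a pure JSON-to-JSON transform of the kind described above.
    # Input shape:  {"user": {"first": "Ada", "last": "Lovelace"}, "orders": [{"sku": "A1", "qty": 2}]}
    # Output shape: {"full_name": "Ada Lovelace", "items": [{"product": "A1", "count": 2}]}
    import json

    def transform(payload: dict) -> dict:
        user = payload["user"]
        return {
            "full_name": f'{user["first"]} {user["last"]}',
            "items": [{"product": o["sku"], "count": o["qty"]}
                      for o in payload.get("orders", [])],
        }

    if __name__ == "__main__":
        raw = '{"user": {"first": "Ada", "last": "Lovelace"}, "orders": [{"sku": "A1", "qty": 2}]}'
        print(json.dumps(transform(json.loads(raw))))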
There just needs to be more targeted training, and some system built around it to write a complete service's code, and you could replace a good number of jobs through that alone.
20 yoe. Never had to write code to transform JSON into JSON in a pure way. Maybe XML to XML… once. Good ol' XSLT!
By "pure" I mean with no other requirements, such as accessing another data store or running a bunch of rules decided through several meetings with various people to find out what is actually required… which is exactly the part ChatGPT doesn't do.
Ya, it's tricky to map that comment to a useful scenario, but maybe getting a JSON document from an external service and saying "write a function that parses this JSON and returns this TS structure". But I think that would be a rare efficiency killer for me.
Generally, transforming the data is not the hard bit; it's specifying the shape. You're not replacing any meaningful jobs by getting GPT to automatically translate one schema to another; you're improving the productivity of existing devs.
Not always: a well-written software contract will generate additional fees/income when analysts or businesses fail to stipulate all the requirements.
On the point of:
>Almost all programming jobs are actually software engineering jobs these days.
That's a very narrow view of Computer Science, ignoring how software and hardware can be exploited in air-gapped scenarios -- exploiting what the military have traditionally called signals intelligence, which isn't something taught in any university or online as far as I'm aware.
Metadata that's undetectable by human senses -- because of restrictions like our range of audible sound, our inability to detect tiny fluctuations of electromagnetic radiation, and our lack of knowledge of a device's abilities -- makes most computer science graduates somewhat blinkered, restricted in perspective, and highly exploitable, with hubris being the number one attribute for most.
IMO Computer Science should be viewed more as a natural science, incorporating things like physics, biology, psychology, and chemistry along with what's currently taught in a stereotypical CS course. I'm reminded of the fact that my language debugger is an excellent colour-blindness test operating in plain sight, and when you become wise to these additional points of interest you start to separate the wheat from the chaff -- who's good and who's not -- because Only the Paranoid Survive!
Central heating and hot water systems, some home security lighting systems, some vehicles, many electronic devices with a CPU of sorts inside. I think it's really quite common when you think about it.
Are you a bot trying to resource burn me? If a bot, would you even know you are a bot?