Yes. Translating business requirements, customer context, engineering constraints, etc. into usable, practical, functional code, and then maintaining and extending that code, is so far beyond the horizon that many other skillsets will be replaced before programming is. After all, at that point, the AI itself, if it's so smart, should be able to improve itself indefinitely. In which case we're fucked. Programming will be the last thing to be automated before the singularity.
Unlike artwork, precision and correctness are absolutely critical in coding.
The tail end of programming will be the last thing to be replaced, maybe. I don’t see why CRUD apps get to hide under the same umbrella as programming ultra-advanced AI.
Let me know when you can speak English to a computer and have it generate CRUD code that satisfies all engineering and design constraints. The AI will need to be dynamic enough to understand nuance, spot gaps in the requirements spec, hold context on the application being built, suggest improvements to the product design, know how to make changes through the same conversational interface, etc.
Accomplishing that is achieving general AI.
In the meantime, there are plenty of boilerplate ORMs and simplistic API template tools that make production of bog standard CRUD apps dead simple. Of course, they all have their drawbacks and trade-offs, and aren't always suitable. But I don't see the amount of software engineering work reducing as a result of these no-code, low-code tools, do you?
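For a sense of how little there is to automate in the "bog standard" case, here is a minimal sketch of the kind of CRUD logic those template tools spit out. This uses Python's stdlib `sqlite3` rather than any particular ORM, and the `users` table is a hypothetical stand-in; real tools layer validation, auth, and migrations on top.

```python
import sqlite3

def connect():
    # In-memory database with one hypothetical "users" table.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    )
    return conn

def create_user(conn, name, email):
    cur = conn.execute(
        "INSERT INTO users (name, email) VALUES (?, ?)", (name, email)
    )
    return cur.lastrowid

def read_user(conn, user_id):
    return conn.execute(
        "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()

def update_user(conn, user_id, email):
    conn.execute("UPDATE users SET email = ? WHERE id = ?", (email, user_id))

def delete_user(conn, user_id):
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
```

The trade-offs the comment mentions show up precisely where this sketch stops: error handling, concurrent access, schema evolution.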
Probably not. People tend to think that tasks that make us think hard require general intelligence just because that's the tool we use to solve that problem. The AI doesn't have to be very good to be able to replace CRUD web app developers (that is, most of us).
As I see it, the real challenge is for it to hold context and communicate iteratively. Also, as you say, to find the gaps in the requirements. That's important. Other than that, you tell it what you want, it creates something, and then you tell it to change things around. Which is, BTW, pretty similar to how it works with biological-life-based developers. Though as we're lazy, we like to clarify a lot of things up front (and either drive customers crazy or teach them that this is the way it works). If you have an AI that spits out code in a few minutes, that may not matter much.
Most of the programming jobs are indeed about making relatively simple stuff from standard components.
> Let me know when you can speak English to a computer and have it generate CRUD code that satisfies all engineering and design constraints. The AI will need to be dynamic enough to understand nuance, missing gaps in the requirements spec, have context on the application being built, able to suggest improvements on product design, know how to make changes through the same conversational interface, etc.
Let me know when you find a single programmer who can do that reliably.
Is it that hard to do? Just design a solution that uses the Alexa Voice Service to parse the vocal input via NLP and then invoke a Lambda function that calls a SageMaker or GPT-3 model to generate code. Granted, it will take a little while to be perfect, but are we really that far from it?
Large chunks, yes, but all that means is that engineers will move up the abstraction stack and become more efficient, not that engineers will be replaced.
Bytecode -> Assembly -> C -> higher level languages -> AI-assisted higher-level languages
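The same climb up the ladder is visible even within one language. Each function below computes the same total, with each successive rung hiding more of the mechanics of the one before it (a toy illustration, not from the thread):

```python
def total_manual(xs):
    # Lowest rung: explicit index bookkeeping, C-style.
    acc = 0
    i = 0
    while i < len(xs):
        acc += xs[i]
        i += 1
    return acc

def total_foreach(xs):
    # Higher rung: the iteration protocol hides the index.
    acc = 0
    for x in xs:
        acc += x
    return acc

def total_builtin(xs):
    # Highest rung: the loop itself disappears behind a builtin.
    return sum(xs)
```

An AI-assisted rung would sit one level above `total_builtin`: you describe the aggregation and never see the loop at all.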
> engineers will move up the abstraction stack and become more efficient
Above a certain threshold of ability, yes.
The same will hold true for designers. DALL-E-alikes will be integrated with the Adobe suite.
The most cutting edge designers will speak 50 variations of their ideas into images, then use their hard-earned granular skills to fine-tune the results.
They'll (with no code) train models in completely new, unique-to-them styles--in 2D, 3D, and motion.
Organizations will pay top dollar for designers who can rapidly infuse their brands with eye-catching material in unprecedented volume. Imitators will create and follow YouTube tutorials.
Mom & pop shops will have higher-fidelity marketing materials in half the time, at half the cost.
History isn't a great guide here. Historically, the abstractions that increased efficiency begat further complexity. Coding in Python elides low-level issues, but the complexity of how to arrange Python's primitives remains for the programmer to engage with. AI coding has the potential to elide all the complexity that we identify as programming. I strongly suspect this time is different.
The space for "AI-assisted higher-level languages" sufficiently distinct from natural language is vanishingly small. Eventually you're just speaking natural language to the computer, which just about anyone can do (perhaps with some training).
The hard part of programming has always been gathering and specifying requirements, to the point where in many cases actually using natural language to do the second part has been abandoned in favor of vague descriptions that are operationalized through test cases and code.
AI that can write code from a natural language description doesn't help as much as you seem to think if natural language description is too hard to actually bother with when humans (who obviously benefit from having a natural language description) are writing the code.
Now, if the AI can actually interview stakeholders and come up with what the code needs to do...
But I am not convinced that is doable short of AGI (AI assistants that improve productivity of humans in that task, sure, but that expands the scope for economically viable automation projects rather than eliminating automators.)
At some point we will be "replaced". When you get AI to be able to navigate all user interfaces, communicate with other agents, plan long term, and execute short term, we will no longer be the main drivers of economic growth.
At some point AI will become as powerful as companies.
And then AI will be able to sustain a positive feedback loop of creating more powerful company-like ecosystems that create even more powerful ecosystems. This process will be fundamentally limited by available power, and the sun can provide a lot of power. Eventually AI will be able to support a space economy, and then the only limit will be the universe.
Literally everyone on this website is in denial. They all approach it by asking which fields will be safe. No field is safe. “But it’s not going to happen for a long time.” Climate deniers say the same thing, and you think they should be wearing the dunce hat? The average person complains bitterly about climate deniers who say it’s “my grandkids’ problem lol”, but when I corner the average person into admitting AI is a problem, the universal response is that it’s a long way off. And that’s not even true! The drooling idiots are willing to tear down billionaires, governments, and any institution whatsoever to protect economic equality and a high standard of living. They would destroy entire industries like a rampaging stampede of belligerent buffaloes if it meant reducing carbon emissions a little. But when it comes to the biggest threat to human well-being in history, there they are in the corner, hitting themselves on their helmeted head with an inflatable hammer. Fucking. Brilliant.
I don't think anyone is in denial about this, it's just not something anyone should concern themselves with in the foreseeable future. AI that can replace a dev or designer is nowhere close to becoming a reality. Just because we have some cool demos that show some impressive capabilities in a narrow application does not mean we can extrapolate that capability to something that is many times more complex.
I agree. It bears repeating: modern AI shines where it does not matter to be precise, whereas programming absolutely _depends_ on being precise.
So, today some good AI applications are face detection, fingerprint recognition, or generating art: places where you need to capture or generate the general gist without pixel precision.
Of course, programming might be under greater threat than we imagine, and I can't claim that anyone holding that position is just plain _wrong_. But I do believe it would take an AI breakthrough that is yet to happen. That breakthrough would also have absolutely crazy consequences beyond programming, because then we would have "exact AI", and the thought of that boggles my mind for sure.
I strongly and emphatically disagree. You frame it like we invented these AIs. Did we write the algorithms that actually run when one produces its output? Of course not; we can't understand them, let alone write them. We just sift around until we find them. So obviously the situation lends itself to surprises. Every other year we get surprised by things that all the “experts” said were 50 years off or impossible. Have you forgotten already?
This comment settles it for me. You’re thoroughly way too hyperbolic in your assessment. If this was closer to reality you’d have been able to state your case in clear, realistic terms. That’s something no one has been able to do so far.
I do deny it. Automation does not destroy jobs even if you're impressed at how good it is at painting; see "Luddite fallacy" and "lump of labor".
Claiming AIs are going to take over or destroy the world has been a basis of "AI safety" research since the 90s, but that isn't real research, it's a new religion run by Berkeley rationalists who read too many SF novels.
The assumption that automation creates (or at least does not destroy) jobs is an extrapolation from the past despite the fact that the nature of automation is constantly changing/evolving.
Also, one thing that everyone seems to ignore is that even if the number of jobs is not reduced, the skill/talent level required for those jobs may (and in fact does) increase, and switching careers does not work for everyone. So you'll inevitably have people without a job even if it's just that the job market is shifting.
But I argue that as automation reaches jobs with higher levels of sophistication, i.e. the jobs of more skilled workers, some people will simply be left out because their talent won't be enough for any job that hasn't been automated.
I'm trying to understand your point, because I think I agree with you, but it's covered in so much hyperbole and invective I'm having a hard time getting there. Can you scale it back a little and explain to me what you mean? Something like: AI is going to replace jobs at such scale that our current job-based economic system will collapse?
Most people get stuck where you are. The fastest way possible to explain it is that it will bring rapid and fundamental change. You could say jobs or terminators, but focusing on the specifics is a red herring. It will change everything, and the probability of a good outcome is minuscule. It’s playing Russian roulette with the whole world, except that instead of a 1-in-6 chance of the bad outcome, it’s a one-in-trillions chance of the good one. The worst and stupidest thing we have ever done.
Just know it. Really think deeply about this important issue and try to understand it thoroughly so that you have a chance at converting others. Awareness precedes any preventative initiatives.
Algorithm space is large, and guess-and-checking through it takes a lot of effort even when it’s automated, like now. It requires huge amounts of compute. And meaningful progress requires the combined effort of the entire world’s intellectual and compute resources. It sounds implausible at first, but this machine learning ecosystem is in fact subject to sanctions. There are extreme but plausible ways of reducing the stream of progress to a trickle. It just requires people to actually wake up to what’s happening.
I agree that many of us are not seeing the writing on the wall. It does give me some hope that folks like Andrew Yang are starting to pop up, spreading awareness about, and proposing solutions to the challenges we are soon to face.
Ignorance is bliss in this case, because this is even more unstoppable than climate change.
You thought climate change was hard to stop?
Try stopping the invention of AI.
The whole world is going to have to change and some form of socialism/UBI will have to be accepted, however unpalatable.
I mean, not really: even a layman non-artist can take a look at a generated picture from DALL-E and determine whether it meets some set of criteria from their clients.
But the reverse is not true: they won't be able to properly vet a piece of code generated by an AI, since that requires technical expertise. (You could argue that if the code produced the requisite set of outputs they would have some marginal level of confidence, but they would never really know for sure without being able to understand the actual code.)
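A toy illustration of why output-matching gives only that marginal confidence: this hypothetical "AI-generated" doubling function passes the one sample a non-technical client checks, yet is wrong almost everywhere else.

```python
def generated_double(n):
    # Looks plausible, and happens to be right for the sample input n=2
    # (2 ** 2 == 2 * 2 == 4), but squaring is not doubling.
    return n ** 2

# The non-technical client's only check:
assert generated_double(2) == 4  # passes, so it "works"
```

Only someone who can read the body and notice `**` where `*` belongs catches the bug; the black-box check alone never will.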