Thoughts on the Future of Software Development (sheshbabu.com)
257 points by rkwz 11 months ago | 420 comments



> In summary, I believe there would still be a market for Software Developers in the foreseeable future, though the nature of work will change

This is precisely what I dread. When it comes to software development specifically, the parts that the AI cheerleaders are excited about AI doing are exactly the parts of the job that I find appealing. If I wanted to be a glorified systems integrator, I would have been doing that job already. The parts that the author is saying will still exist are the parts I put up with in order to do the enjoyable and satisfying work.

So this essay, if it's correct, explains the way that AI threatens my career. Perhaps there is no role for me in the software development world anymore. I'm not saying that's bad in the big picture, just that it's bad for me. It increasingly appears that I've chosen the wrong profession.


This resonates strongly with me. I don't want to describe the painting, I want to paint it. If this is indeed where we end up, I don't know that I'll change professions (I'm 30+ years into it), but the joy will be gone. It will truly become "just a job".


I remember back in the 80s I had friends who enjoyed coding in assembly and felt that using higher-level languages was "cheating" - isn't this just a continuation of that?


Yeah, that's a good way of looking at it. We gradually remove technical constraints and move to a higher level of abstraction, much closer to the level of the user and the business rather than the individual machine. But what's the endpoint of this? There will probably always be a need for expert-level troubleshooters and optimizers who understand all the layers, but for the rest of us, I'm wondering if the job wouldn't generally become more product management than engineering.


I'm not sure that there is an endpoint, only a continuation of the transitions we've always been making.

What we've seen as we transitioned to higher and higher level languages (e.g., machine code → macro assembly → C → Java → Python) on unimaginably more powerful machines (and clusters of machines) is that we took on more complex applications and got much more work done faster. The complexity we manage shifts from the language and optimizing for machine constraints (speed, memory, etc.) to the application domain and optimizing for broader constraints (profit, user happiness, etc.).

I think LLMs also revive hope that natural languages (e.g., English) are the future of software development (COBOL's dream finally be realized!). But a core problem with that has always been that natural languages are too ambiguous. To the extent we're just writing prompts and the models are the implementers, I suspect we'll come up with more precise "prompt languages". At that point, it's just the next generation of even higher level languages.

So, I think you're right that we'll spend more of our time thinking like product managers. But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?). I don't think these are new trends, but continuing (maybe accelerating?) ones.


I don't think COBOL's dream was to generate enormous amounts of assembly code that users would then have to maintain (in assembly!), producing differently wrong results every time you ran it.


It may not have been the dream, but the reality is many COBOL systems have been binary-patched to fix issues so many times that the original source may not be a useful guide to how the thing actually works.


Can you share any more info on this?


> But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?).

It’s likely that a near-future AI system can suggest suitable math and implement it in an algorithm for the problem the user wants solved. An expert who understands it might be able to critique and ask for a better solution, but many users could be satisfied with it.

Professionals who can deliver added value are those who understand the user better than the user themselves.


This kind of optimization is what I did for the last few years of my career, so I might be biased / limited in my thinking about what AI is capable of. But a lot of this area is still being figured out by humans, and there are a lot of tradeoffs between the math/software/business sides that limit what we can do. I'm not sure many business decision makers would give free rein to AI (they don't give it to engineers today). And I don't think we're close to AI ensuring a principled approach to the application of mathematical concepts.

When these optimization systems (I'm referring to mathematical optimization here) are unleashed, they will crush many metrics that are not a part of their objective function and/or constraints. Want to optimize this quarter's revenue and don't have time to put in a constraint around user happiness? Revenue might be awesome this quarter, but gone in a year because the users are gone.
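
To make that failure mode concrete, here is a toy sketch (the numbers and variable names are invented, not from any real system): the same linear program solved with and without a user-happiness constraint, using scipy.

    from scipy.optimize import linprog

    # Toy model: x = ad load per session. Revenue ~ 3x, happiness ~ 100 - 12x.
    # linprog minimizes, so maximizing revenue means minimizing -3x.
    revenue_only = linprog(c=[-3], bounds=[(0, 10)])
    # Optimizer pushes x to 10: great quarter, happiness = 100 - 120 = -20.
    # The metric that wasn't modeled gets crushed.

    with_constraint = linprog(c=[-3], A_ub=[[12]], b_ub=[60], bounds=[(0, 10)])
    # Requiring happiness >= 40 (i.e. 12x <= 60) caps x at 5: less revenue
    # now, but the unmodeled metric survives.

    print(revenue_only.x, with_constraint.x)  # [10.] vs [5.]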

The system I worked on kept our company in business through the pandemic by automatically adapting to frequently changing market conditions. But we had to quickly add constraints (within hours of the first US stay-at-home orders) to prevent gouging our customers. We had gouging prevention in before, but it suddenly changed in both shape and magnitude - increasing prices significantly in certain areas and making them free in others.

AI is trained on the past, but there was no precedent for such a system in a pandemic. Or in this decade's wars, or under new regulations, etc. What we call AI today does not use reason. So it's left to humans to figure out how to adapt in new situations. But if AI is creating a black-box optimization system, the human operators will not know what to do or how to do it. And if the system isn't constructed in a mathematically sound way, it won't even be possible to constrain it without significant negative implications.

Gains from such systems are also heavily resistant to measurement, which we need to do if we want to know if they are breaking our business. This is because such systems typically involve feedback loops that invalidate the assumption of independence between cohorts in A/B tests. That means advanced experiment designs must be found that are often custom for every use case. So, maybe in addition to thinking more like product managers, engineers will need to be thinking more like data scientists.

This is all just in the area where I have some expertise. I imagine there are many other such areas. Some of which we haven't even found yet because we've been stuck doing the drudgery that AI can actually help with. [cue the song Code Monkey]


> AI is trained on the past.

Yes, unless models are being live fine-tuned, but generally yes.

>What we call AI today does not use reason

I don’t think this is correct; I think it’s more accurate to say it reasons on its priors rather than from first principles.

For the most part, I agree with the rest of your post.


> machine code → macro assembly → C → Java → Python

This made me laugh out loud. Python is not a step up from Java in my opinion. Python is more of a step up from BASIC. It's a different evolutionary path. Like LISP.


> machine code → macro assembly → C → Java → Python

The increase in productivity, we can all agree on, but a non-negligible portion of HN users would say that each one of those new languages made programming progressively less fun.


I think where people will disagree is how much productivity those steps brought.

For instance I think the step from machine code to macro assembler is bigger than the step from a macro assembler to C (although still substantial), but the step from C to anything higher level is essentially negligible compared to the massive jump from machine code to a 'low level high level' language like C.


So many other things happened at the same time too, so it's sometimes hard to untangle what is what.

For instance, say that C had namespaces, and a solid package system with a global repo of packages like Python, C# and Java have.

Then you'd be able to throw together things pretty easily.

Things easily cobbled together with Python often aren't attributable to Python the language per se, but rather Python, the language and its neat packages.


Python is a step backwards in productivity for me compared with typed languages. So no I don't think we all agree on this. You might be more productive in Python but that's you not me.


The endpoint is that being a programmer becomes as obsolete as being a human "calculator" for a career.

Millions, perhaps billions of times more lines of code will be written, and automated programming will be taken for granted as just how computers work.

Painstakingly writing static source code will be seen the same way as we see doing hundreds of pages of tedious calculations using paper, pencil, and a slide rule. Why would you do that, when the computer can design and develop such a program hundreds of times in the blink of an eye to arrive at the optimal human interface for your particular needs at the moment?

It'll be a tremendous boon in every other technical field, such as science and engineering. It'll also make computers so much more useful and accessible for regular people. However, programming as we know it will fade into irrelevance.

This change might take 50 years, but that's where I believe we're headed.


Yet, we still have programmers writing assembly code and hand-optimizing it. I believe that for most software engineers, this will be the future. However, experts and hobbyists will still experiment with different ways of doing things, just like people experiment with different ways of creating chairs.

An AI can only do what it is taught to do. Sure, it can offer unique insights from time to time, but I doubt it will get to the point where it can craft entirely new paradigms and ways of building software.


You might be underestimating the potential of an automated evolutionary programming system at discovering novel and surprising ways to do computation—ways that no human would ever invent. Humans may have a better distribution of entropy generation (i.e. life experience as an embodied human being), but compared to the rate at which a computer can iterate, I don't think that advantage will be maintained.

(Humans will still have to set the goals and objectives, unless we unleash an ASI and render even that moot.)


AI, even in its current form can provide some interesting results. I wouldn’t underestimate an AI, but I think you might be underestimating the ingenuity of a bored human.


Humans aren't bored any more [0]. In the past, the US had 250 million people who were bored. Today it has far more than that, scrolling through Instagram and TikTok, responding to Reddit and Hacker News, and generally not having time to be bored.

Maybe we'll start to evolve as a species to avoid that, but AI will be used to ensure we don't, optimising far faster than we can evolve to keep our attention

[0] https://bigthink.com/neuropsych/social-media-profound-boredo...


I’m bored all the time and I’m a human. Last I checked anyway.


I agree, it's definitely still possible to get bored.

If I stop making progress on my personal projects, sinking my free time into games or online interaction is very unsatisfying.


Perhaps, but evolutionary results are difficult to test. They tend to fail in bizarre, unpredictable ways in production. That may be good enough for some use cases but I think it will never be very applicable to mission critical or safety critical domains.

Of course, code written by human programmers on the lower end of the skill spectrum sometimes has similar problems...


It doesn't seem like a completely different thing to generate specifications and formally verified programs for those specifications (though I'm not familiar with how those are done today).


I mean, I don’t even like programming with Spring because what all of those annotations are doing is horribly opaque. Let alone mountains of AI generated code doing God knows what.

I mean Ken Thompson put a back door into the C compiler no one ever found. Can you imagine what an AI could be capable of?


ASI?


Artificial Super-Intelligence


It will equally eliminate the need for all scientists and engineers. And every other human occupation.


I don't believe that's going to happen. If it were, humans would have stopped playing chess. But not only do lots of people still play chess, people make a living playing chess. There are YT channels devoted to chess. The same thing will be true of almost all sports, lots of entertainment, and lots of occupations where people prefer human interaction. Bartenders and servers could be automated away, but plenty of people like to sit at a bar or table and be served by someone they can talk to. I have a hard time seeing nurses being replaced. Are people going to want the majority of their care automated?

I also don't know what it means to completely remove humans from all work. Who is deciding what we want done? What we want to investigate or build? The machines are just going to make all work-related decisions for us? I don't believe that. It would cease being our society at that point.

Which brings up the heart of the matter. Why are we trying to replace ourselves? It's our civilization; automation is just a set of tools we use to be more productive. It should make our lives better, not remove us from the equation.

My guess is the real answer is it will make some people obscenely rich, and give some governments a significant technical advantage over others.


Chess is not something you make new discoveries in anymore, nor something that results in a product that people pay for. Poor analogy.


Ahaha, no. Every time period has its own distinct style. You can tell the difference between Magnus, Kasparov, Capablanca, etc. Lots of innovation in chess in fact, almost uninfluenced by machines.


“It will cease being our society” is the most likely outcome. Current politics demonstrates we have lost the ability to collaborate for our common good. So the processes accelerating AI capabilities will be largely unchecked until it’s too late and the AIs will optimize whatever inscrutable function they have evolved to prioritize.


> The endpoint is that being a programmer becomes as obsolete as being a human "calculator" for a career.

Yeah, around the same time the singularity happens, and then your smallest problem will be eons bigger than your job.

But LLMs can’t solve a sudoku, so I wouldn’t be too afraid.


They are pretty close. LLMs can write the code to solve a sudoku, or leverage an existing solver, and execute the code. Agent frameworks are going to push the boundaries here over the next few years.
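
For what it's worth, the code in question is tiny; a minimal backtracking sketch of the kind current models readily produce (this example is illustrative, not any particular model's output):

    def solve(grid, pos=0):
        # grid is a 9x9 list of lists, 0 = empty cell; solves in place.
        if pos == 81:
            return True
        r, c = divmod(pos, 9)
        if grid[r][c]:
            return solve(grid, pos + 1)
        for d in range(1, 10):
            ok = (all(grid[r][k] != d for k in range(9))
                  and all(grid[k][c] != d for k in range(9))
                  and all(grid[3*(r//3)+i][3*(c//3)+j] != d
                          for i in range(3) for j in range(3)))
            if ok:
                grid[r][c] = d
                if solve(grid, pos + 1):
                    return True
                grid[r][c] = 0
        return False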


> LLMs can write the code to solve a sudoku

It’s literally part of its training data. The same way it knows how to solve leetcode, etc.


>There will probably always be a need for expert-level troubleshooters and optimizers who understand all the layers

There are already so many layers that essentially no one knows them all at even a basic level, let alone expert. A few more layers and no one in the field will even know of all the layers.


Is a more generic version of this argument that there will always be a need for smart/experienced people?


Seems so. Those friends did have to contend with the enjoyable part of their job disappearing. Whether they called it cheating or not doesn't diminish their loss.


It didn't; there are still many roles for skilled assembly programmers in performance-critical or embedded systems. It's just that their share of the overall world of programming has decreased due to high-level programming languages, although better technology has also increased the size of the market that has demand for assembly.


I am not skilled in these areas so I am very scared. I am going to go back to school to get a nursing degree because it is guaranteed not to be disrupted by the disrupters, unlike now, where the disrupters are disrupting themselves. Despite the personal risks of a healthcare job, it will bring me so much more peace of mind.


I'm afraid it's naive to think that nursing is not going to get disrupted by AI. Seems like robotics is going to massively impact medical caregiving in the near future.


> robotics is going to massively impact medical caregiving in the near future

Not in the near near future. Do you know anything about nursing? The field will require some hard changes for robots to replace nurses, and the robots will need licenses


Even without robotics, many jobs like nursing (or construction) that require training will be able to be accomplished with much less training + a live computer coach that can give context-specific directions.


This is how we get to Idiocracy. Everyone is now relying on AI and becoming stupid because of it.


Definitely a risk. There's also an upside to having a 1-on-1 tutor with limitless patience and knowledge.


They could move into compilers or VMs or low level performance profiling, where those skills are still very important.


I think it's a fundamentally different thing, because AI is a leaky abstraction. I know how to write c code but I actually don't know how to write assembly at all. I don't really need to know about assembly to do my job. On the other hand, if I need to inspect the output of the AI to know that it worked, I still need to have a strong understanding of the underlying thing it's generating. That is fundamentally not true of deterministic tools like compilers.


Boilerplate being eliminated by syntactic sugar or a runtime is not the same thing. Sure, that made diving in easier, but it didn't abstract away the logic design part - the actual programming part. Now the AI spits out code for you without thinking about the logic.


The difference is that your friend had a negative view of others that the OP is not presenting. They’re just stating their subjective enjoyment of an activity.


Not commenting on the mindset of earlier programmers, rather the analogy you offer: language level abstraction is entirely unlike process specialization.

For example, when moving up to C from assembler, the task of the "programmer" remains invariant and the language tool affords broader accessibility to the profession since not everyone likes to flip bytes. There is no subdivision of overall task of "coding a software product".

With AI coders, there is task specialization, and, as pointed out, what's left on the table is the least appetizing of the software tasks: being a patch monkey.

This is the issue.


David Parnas has a great take on this:

"Automatic programming always has been a euphemism for programming with a higher level language than was then available to the programmer. Research in automatic programming is simply research in the implementation of higher-level languages.

Of course automatic programming is feasible. We have known for years that we can implement higher-level programming languages. The only real question was the efficiency of the resulting programs. Usually, if the input 'specification' is not a description of an algorithm, the resulting program is woefully inefficient. I do not believe that the use of nonalgorithmic specifications as a programming language will prove practical for systems with limited computer capacity and hard real-time deadlines. When the input specification is a description of an algorithm, writing the specification is really writing a program. There will be no substantial change from our present capacity.

The use of improved languages has led to a reduction in the amount of detail that a programmer must handle and hence to an improvement in reliability. However, extant programming languages, while far from perfect, are not that bad. Unless we move to nonalgorithmic specifications as an input to those systems, I do not expect a drastic improvement to result from this research.

On the other hand, our experience in writing nonalgorithmic specifications has shown that people make mistakes in writing them just as they do in writing algorithms."

Programming with AI, so far, tries to specify something precise, algorithms, in a less precise language than what we have.

If AI programming can find a better way to express the problems we're trying to solve, then yes, it could work. It would become a matter of "how well the compiler works". The current proposal, with AI and prompting, is to use natural language as the notation. That's not better than what we have.

It's the difference between Euclid and modern notation, with AI programming being like Euclidean notation and current programming languages being the modern notation:

"if a first magnitude and a third are equal multiples of a second and a fourth, and a fifth and a sixth are equal multiples of the second and fourth, then the first magnitude and fifth, being added together, and the third and sixth, being added together, will also be equal multiples of the second and the fourth, respectively."

a(x + y) = ax + ay

You can't make something simpler by making it more complex.

https://web.stanford.edu/class/cs99r/readings/parnas1.pdf


I do think it’s basically the same. It’s further along the same continuum from machine code to natural language.


I don't really think it's a continuum. There is a continuum of abstraction among programming languages, from machine code to Java/Python/Haskell or whatever, but natural language is fundamentally different: it's ambiguous, ill-defined. Even if LLMs generate a lot of our code in the future, somebody is going to have to understand it, verify its correctness, and maintain it.


Natural language, python, c, assembly

The distance isn’t the same between them, but each one is more abstracted than the next.

Natural language can be ambiguous and ill-defined because the compiler is smarter, just like you don’t have to manage memory in Python; it just abstracts a lot more.

The fact is that this very instant you can compile from natural language.


There is a vast gulf between natural language and the other 3, which are fundamentally very similar to each other.


LLMs can generate code, but they still need to be prompted correctly, which requires someone who knows how to program beyond toy examples, since the code is going to have to be tested and integrated into running code. The person will need to understand what kind of code they're trying to generate, and whether that meets the business requirements.

Python is closer to C (third generation programming language). Excel is a higher level example. It still takes someone who knows how to use Excel to do anything meaningful.


Great point.

I think this will weed out the people doing tech purely for the sake of tech and will bring more creative minds who see the technology as a tool to achieve a goal.


Indeed, can't wait for the day when technical people can stop relishing the moments of intimate problem solving between stamping out widgets, and instead spend all day constantly stamping out widgets while thinking about the incredible bullshit they'll be producing for pennies. Thanks boss!


It feels that people commenting on this post are forgetting that tools have evolved since the times of punch cards or writing only in pure assembly.

I personally wouldn't have enjoyed being that kind of programmer as it was a tedious and very slow process, where the creativity of the developer was rather low as the complexities of development would not allow for just anyone to be part of it (my own assumption).

Today we have IDEs, autocomplete, quick visual feedback (inspectors, advanced debuggers, etc.) which help people who enjoy creating and seeing the results of their work, as opposed to purely typing code for someone else.

So, I don't get why people jump straight to thinking that adding yet another efficiency tool would destroy everything. To me it seems to make developing simpler applications something which doesn't require a computer science degree, that's all.


> I personally wouldn't have enjoyed being that kind of programmer as it was a tedious and very slow process, where the creativity of the developer was rather low as the complexities of development would not allow for just anyone to be part of it (my own assumption).

I think your assumption is incorrect. I remember programming using punched cards and low-level languages, and the amount of creativity involved was no less than is involved now.


That’s like saying Shakespeare couldn’t be productive as a writer because he didn’t have a word processor.


The few people at the rightmost edge of the bell curve shouldn't be used as an example in this case

The average attorney became much more productive after the introduction of the word processor.


So are you saying that you would rather live in a society where only lucky people could participate in a given field than make it accessible to more people?


Except that's what low code is today. You'll have to describe it in such detail that you might as well paint it yourself.

Maybe it will abstract away setting up the paint and brush and the canvas, that part I'm fine with though.


From the perspective of the programmer, true. Not necessarily from the perspective of the manager/customer, who can say in broad terms what needs to be done, and the programmer-black-box spits something out.


The manager/customer is going to be very disappointed that the computer can't just "do what I ask it to".


Agreed. But isn't this what has been happening over time for all manual jobs? I mean people used to carve wood. Now, machines do that with more precision & speed. The same goes for laying roads, construction & other professions.

All niche jobs will become mundane chores. I don't know if it is good or bad. Because humans always find a way to cultivate something new.


Only until machines are better at everything humans can do.


Yeah, but those were boring jobs. Programming is fun.

I'm only half joking.


Funny you should phrase it this way. I know you mean prompts as description, but I would currently prefer declaring/describing what I want in a higher-level functional way rather than doing all the stateful nitty-gritty iterations to get it done. Some folks want to do manual memory management, or even work with a borrow checker; I'm good for most purposes with GC.

The question is always what's your 'description' language and what's your 'painting' language? I see the same in music: DJs mix and apply effects to pre-recorded tracks, others resample on the fly, while some produce new music from samples, and others form a collage from generated soundscapes, etc. It's all shades of gray.


Call me a cynic (many have, especially on this topic) but I can't help but think that the majority of what AI will "successfully" replace in terms of craftsmanship is going to be stuff that would've never been produced the "correct" way if you will. It's going to be code created for and to suit the interests of the business major class. Just like AI art isn't really suitable for anything above hobby fun stuff like generating your D&D character's avatar, or product packaging stock photo junk or header images for LinkedIn blog posts. Anything that's actually important is going to still need to be designed, and that goes for creative work like design, and proper code-work for development too, IMO.

Like sure, these AI's can generate code that works. Can they generate replacement code when you need to change how something works? Can they troubleshoot code that isn't doing what it's meant to? And if you can generate the code you want but then need to tweak it after to suit your purpose, is that... really that much faster than just writing the thing in your style, in a way you understand, that you can then change later as required?

I dunno. I've played with these tools and they're neat, and I think they can be good for learning a new language or framework, but once I'm actually ready to build something, I don't see myself starting with AI generation for any substantial part of it.


The question is not about what AI can do today but what we assume AI will be able to do tomorrow.

All of what you wrote in your second paragraph will become something AI will be doing better and faster than you.

We never had technology which can write code like this. I prompted ChatGPT to write a very basic Java tool which renders an image from a URL and makes it bigger on a click. It just did it.

It's not hard to think further, and a lot of technology is already going in this direction. Just last week Devin was shown. Gemini has a context window of 1 million tokens. Groq shows us how it will feel to have instant responses.

Right now it's already good enough that people with Copilot like to keep it when asked. We already pay billions for AI daily. This means the amount of research, business motivation and money flowing into it now is probably staggering in comparison to what moved this field a few years ago.

It's not clear at all how fast we will progress, but I'm pretty sure we will hit a time where every junior is worse than AI, which will force people to rethink what they are going to do. Do I hire a junior and train him/her? Or do I prefer to invest more into AI? The gap will widen and widen; a generation or a certain number of people will stay longer and might be able to stay in development, but a lot of others might just not.


> We never had technology which can write code like this. I prompted ChatGPT to write a very basic Java tool which renders an image from a URL and makes it bigger on a click. It just did it.

It's worth noting that it can do things like that because of the large number of "how to do simple things in java" tutorials there are on the internet.

Ask an AI to _make_ java, and it won't (and will continue to not) be able to.

That's the level that AI will fail at, when things aren't easily indexed from the internet and thus much harder / impossible to put into a training set.

I think the technology itself (transformers and other such statistical models) has exhausted most of its low-hanging fruit by now.

Sora, for example, isn't a grand innovation in the way latent space models, word2vec, or transformers are; it's just a MUCH larger model than DALLE-3. Which is great! But it still has the limits inherent to statistical models: they need the training data.


> It's worth noting, that it can do things like that because of the large amount of "how to do simple things in java" tutorials there are on the internet.

Much like the same points made elsewhere with regard to AI art: It cannot invent. It can remix, recombine, etc. but no AI model we have now is anywhere close to where it could create something entirely new that's not been seen before.


I have not seen crabs made out of food before.

What level do you think 'invention' has to be at to count as something AI can't do?

The only thing AI needs is a feedback loop and a benchmark/cost function.

If the cost function is page impressions, that's easy. If it's running unit tests based on business requirements, that's easy too.


> The only thing AI needs is a feedback loop and a benchmark/cost function.

You’re forgetting about data. AI needs data. It’s arguably the most important thing any statistical model needs.

That data must come from somewhere.

There’s no free lunch.


> The question is not about what AI can do today but what we assume AI will be able to do tomorrow.

And I think many assumptions on this front are products of magical thinking that are discarding limitations of LLMs in favor of waiting for the intelligence to emerge from the machine, which isn't going to happen. ChatGPT and associated tech is cool, but it is, at the end of the day, pattern recognition and reproduction. That's it. It cannot invent something not before seen, or in our case here, it cannot write code that's never been written.

Now that doesn't make it useless; there's tons of code being written all the time that's been written thousands of times before. But it does mean that, depending on what you're trying to build, you will run into its limitations pretty quickly and have to start writing it yourself. And that being the case... why not just do that in the first place?

> We never had technology which can write code like this. I prompted ChatGPT to write a very basic Java tool which renders an image from a URL and makes it bigger on a click. It just did it.

Which it did, because as the other comment said, tons of people already have.

> It's not clear at all how fast we will progress, but I'm pretty sure we will hit a time where every junior is worse than AI, which will force people to rethink what they are going to do. Do I hire a junior and train him/her? Or do I prefer to invest more into AI? The gap will widen and widen; a generation or a certain number of people will stay longer and might be able to stay in development, but a lot of others might just not.

I mean, this sounds like an absolute crisis in the making for software dev as a profession, when the entire industry is reliant on a small community of actual programmers overseeing tons of robot junior devs turning out mediocre code. But to each their own I suppose.


Most of the time I'm not 'inventing' anything new either.

I get a requirement, find a solution, and 99.99999% of the time the solution is not a new algorithm. I actually believe I have never invented a new algorithm.

Besides, the next step is reasoning in GPT-5, and Devin shows that GPTs/LLMs can start breaking down tasks.

I don't mind being wrong, tbh; there is no risk in it for me if AI does not take my job. But I don't believe it. I do believe the progress will get better and better and AI will do more and more reasoning.

It can easily try and do things 1000x faster than us, including reasoning. It's not hard to see that it will also be able to create its own examples and learn from them.


> I get a requirement, find a solution, and 99.99999% of the time the solution is not a new algorithm. I actually believe I have never invented a new algorithm.

I can think of tons of things I do in my day-to-day programming that, while certainly not new or remarkable advances in technology, are at least new enough that you're not going to find a Stack Overflow thread for it.

Again, you guys are pointing to a code generator that can generate functions or code snippets to accomplish a particular task, and again, that is cool and I think it has a huge usage if nothing else as an assistive learning tool when you're trying to pick up a new language or get better with a library or what have you. But again, my point is, ask it to do something that doesn't appear in a bunch of those threads. Ask it to solve a particular bugbear problem in your codebase. Ask it to invent a new language, even a high level one.

> It can easily try and do things 1000x faster than us, including reasoning

AI is not a reasoning machine, though. I'd be very interested in what you mean by the word "reasoning" in this context.


"I can think of tons of things I do in my day-to-day programming that, while certainly not new or remarkable advances in technology, are at least new enough that you're not going to find a Stack Overflow thread for it."

I don't. I might solve current issues, new error messages from new/other libraries, etc., but nothing genuinely novel.

"reasoning": thinking about a problem, reasoning about potential solutions, estimating the best action, executing it, retrying.

Reasoning in sense of, if the error message indicates a hibernate issue, reducing the search space for solution finding.


In my workplace juniors were replaced years ago by a never-ending rotation of offshore contractors. As soon as we train our offshore team, they are rotated somewhere else.

At least AI will stay put.

But most of us will be getting by on basic income. Or banging on the gates of robo guarded walls begging for food.


Why invent a new lang when it's easier to compile down to machine code and grok that?


It could be that AI is smart enough for this. Nonetheless, a language is a compression thing. You rarely use single ASM instructions.


I think the question is whether we're going to plateau at 95% or not. It's possible that we just run into a wall with transformers, or they do iron it out and it does replace us all.


Also if there are fewer humans involved in the code production there is a lot of room for producing code that "works", but is not cohesive or maintainable. Invariably there will be a point at which something is broken and someone will need to wade through the mess to find why it's broken and try to fix it.


This is the future imagined by A Fire Upon the Deep and its sequel. While less focused on the code being generated by ai, it features seemingly endless amounts of code and programs that can do almost anything but the difficulty is finding the program that works for you and is safe to use.

To some extent... This is already the world we live in. A lot of code is unreadable without a lot of effort or expertise. If all code was open sourced there would almost certainly be code written to do just about anything you'd like it to. The difficulty would be finding that code and customizing it for your use.


To piggyback off the sci-fi talk, I imagine in the far future, the programmer will become some sort of literal interface between machines and humans.

I imagine some sort of segregation would happen where the "machine cities" would be somewhat removed from the general human populace. This would be to ensure the machines could use whatever information transport system they desired, unencumbered by the needs of the human populace, and vice-versa.

At a certain level of compute, I prognosticate that a certain level of logistical optimization would be trivial to advanced intelligences, and could be accomplished with almost-literally no effort using left-over cycles from whatever big calculation they were doing.

This would start to define different roles for humanity and machine. With logistics essentially "solved," a programmer would be a human-machine interpreter, sometimes journeying to the machine cities to disseminate the needs of the people, or define a good way to introduce new technology to the populace.

This could look something like: During a headlining musical act, a "programmer," recently-returned from the machine city, grabs a mic and says "Does anyone want some of this BLUE, GLOWING, NON-RADIOACTIVE SELTZER WATER?" At which point the crowd would go wild. "If you liked that, just wait until you see what's coming next week!"

So essentially the programmer role becomes a hype-man for new, emergent technologies.


Thanks for the Book Title. It looks like an interesting read.


Caution - lots of people like to talk about this "code archeology" idea as if it's a central driving point of the book, whereas in fact it's mentioned once in passing in the prologue and is never again relevant to the story.

Don't get me wrong, it's still a decent book on its own merits - but don't go into it expecting that to be the main point of the book (I did, and was disappointed as a result).


I'd argue that while it's not a core driving part of the narrative... It is central to the idea of the book and its sequel. It's a decent-sized book with a lot of ideas, and the idea of code archeology and the repercussions of it are what the book is about as much as any of the other main ideas.

But yes, if you want a book that focused only on that... This is going to disappoint.


> It is central to the idea of the book and its sequel. [..] the idea of code archeology and the repercussions of it are what the book is about as much as any of the other main ideas.

Can't speak to the sequel as I gave up on the series after that, but it's _really_ not relevant to the plot or ideas of the first book at all. All that matters for the plot is that a hostile, powerful, uncontrollable AI arises. In the book, it _happens_ to be because of a code archeologist "delving too greedily and too deep"; but the plot would not be changed one iota if it had simply arisen (and gone off the rails) as a product of general AI development.



As a counterpoint, the main nemesis of the book comes from software that is found in archaeological expedition. While software archeology doesn't show up after the first chapter, the ramifications of what happens in that world due to so much software is pretty central.


This is certainly true, and doesn't detract from my disappointment not to have actually seen the software archeology in practice.


No problem. I've been a sci-fi reader my entire life and was shocked I hadn't stumbled across Vinge earlier. The sequel/prequel to Fire Upon the Deep, called A Deepness in the Sky, is arguably even better, and the same idea of tech/code being used and customized far after it's written is even more central to the plot.

Two of my favorite reads of the last few years, so I highly recommend them.

Further... After some digging it looks like there is an old slashdot discussion on the same topic: https://slashdot.org/story/06/11/04/0622246/no-more-coding-f...

Likely some spoilers for the books in there so may be worth holding off until after you've read them if you intend to.


Certainly many of us here already have a good amount of experience debugging giant legacy spaghetti code-bases written by people you can't talk to, or people who can't debug their own code. That job may not change much.


I remember one such occasion back in a previous tech boom (late 90s) and it turned out the reason I couldn't talk to the guy who wrote this particular pile of Italian nutrition was that the Feds had shown up one day and taken him to jail (something to do with pump and dump market manipulation via a faked analyst report [edit: actually a faked press release I now remember. "SmallCapCorp (NASDAQ: SCC$) announces they have received a record breaking order for their next gen product / acquisition offer / something like that from RandomIsraeliCompanyThatMightNotEvenHaveExisted"]).

A lot of software engineers would spend a portion of their day tracking their volatile stock / options etc. in those years.


Nah, you just throw it out and have the AI generate an all new one with different problems!


I really look forward to all programs now having new strange bugs every release. They already do, but I expect AI to do that more at first.


The hacking opportunities will be endless. Feeding AI the exploit will be new.


I don't know if AIs will ever get really good at QA in general, but I do think that AIs can get quite good quickly at regression testing.


That's how bads use GPT to code. The right way is to ask GPT to break the problem down into a bunch of small strongly typed helper functions with unit tests, then ask it to compose the solution from those helper functions, also with integration tests. If tests fail at any point you can just feed the failure output along with the test and helper function code back in and it will almost always get it right for reasonably non-trivial things by the second try. It can also be good to provide some example helper functions/tests to give it style guidelines.
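
A hypothetical example of the granularity being described: a small, strongly typed helper plus a test whose failure output can be pasted straight back into the chat (the function and its behavior are made up for illustration):

    from decimal import Decimal

    def apply_discount(price: Decimal, percent: Decimal) -> Decimal:
        # Return price reduced by percent (0-100), rounded to cents.
        if not Decimal(0) <= percent <= Decimal(100):
            raise ValueError("percent must be between 0 and 100")
        return (price * (Decimal(100) - percent) / Decimal(100)).quantize(Decimal("0.01"))

    def test_apply_discount():
        assert apply_discount(Decimal("19.99"), Decimal("25")) == Decimal("14.99")
        # If this fails, feed the assertion error and the helper back to GPT.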


If you're already doing all of this work then it's trivial to actually type all the stuff in yourself

Is GPT actually saving you any time if it can't actually do the hard part?


It's not really "all this work," once you have good prompts you can use them to crank out a lot of code very quickly. You can use it to crank out thousands of lines of code a day that are somewhat formulaic, but not so formulaic that a simple rules based system could do it.

For example, I took a text document with headers for table names and unordered lists for table columns, and had it produce a database schema which only required minor tuning, which I then used to generate sqlmodel classes and typescript types. Then I created an example component for one entity and it created similar components for the others in the schema. LLMs are exceptionally good at this sort of domain transformation; a decent engineer could easily crank out 2-5k lines/day if they were mostly doing this sort of work.
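
Roughly the kind of formulaic output being described; a heading like "## products" plus its bulleted column list becomes something along these lines (table and field names are invented for illustration, using sqlmodel as mentioned above):

    from typing import Optional
    from sqlmodel import Field, SQLModel

    class Product(SQLModel, table=True):
        # Generated from the "## products" heading and its column list.
        id: Optional[int] = Field(default=None, primary_key=True)
        name: str
        unit_price: float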


Now your description of "good prompts" to reuse has created an abomination in my mind. I blame you.

The abomination: prompts being reused by way of yaml templating, a Helm chart of sorts but for LLM prompts. The delicious combination of yaml programming and prompt engineering. I hope it never exists.


You know, with GPT you can do these steps in a language you are not familiar with and it will still work. If you don't know some aspect of the language or its environment specifics you can just chat until you find out enough to continue.


How do I know if a problem needs to be broken down by GPT, and how do I know if it broke the problem down correctly? What if GPT is broken or has a billing error? How do I break down the problem then?


1. Intuition built by trial and error
2. Domain expertise backed by automated checks
3. The old-fashioned way, and if your power is out you can even bust out a slide rule


Maybe I'm being overly optimistic but in a future where a model can digest hundreds of thousands of lines of code, write unit tests, and do refactors, will this even be a problem?


I'm the opposite. I enjoy engineering and understanding systems. Manually coding has been necessary to build systems up until now. AWS similarly was great because it provided a functional abstraction over the details of the data center.

On a personal level I feel bad for the people who enjoyed wiring up small data centers or enjoyed writing GitHub comments about which lint rules were the best. But I'm glad those are no longer necessary.


People still wire up data centres.


Being a developer, I heartily agree with you.

Being a human, I realise that I as a developer have put a lot of people out of a job. Those folks have had to adapt to that change.

I guess now it's our time to adapt to change. At least it keeps me on my feet!


> I realise that I as a developer have put a lot of people out of a job.

For most developers this will not be true. Most apps, websites, compilers, desktop software etc. will not have put anyone out of a job. I certainly never ever put someone out of a job. I made some people's lives easier, but their total working hours didn't shorten and they certainly did not change profession, nor were they replaced. In fact the majority of tasks that my software was applied to would simply have been deemed impossible to do and not have been done and that would have been all there was to it.


Having once worked on a project to improve some customer care software, I know that as a direct result of those improvements, people got fired.

I'm sure they all found new jobs but it did make me think about the consequence of my work.

Other projects involved making freemium games more addictive to suck people into paying. Of course everyone has a choice, but playing on people's addictions to make money is of questionable morality.


To my philosophy, the goal of technology should be _precisely_ to get rid of as many jobs as possible.

Unfortunately technology benefits only the 1% in our current society, and job loss is a bad thing (it should not be!)


> I guess now it's our time to adapt to change.

I'm just saddened by the prospect that, for me, "adapting to change" would mean "no longer being able to make a living doing what I actually enjoy". That's why, if this is the future, it's a career-killing one for me. Whether or not I stay in the industry, there is no future in my chosen career path, and the alternative paths that people keep bringing up all sound pretty terrible to me.

My only hope is that AI will not achieve the heights that its proponents are trying to reach (I suspect this is the case). I see no other good outcome for me.


If AI does achieve the hyped heights then we're all out of a job, regardless of what we do.

Many people suffer through bullshit jobs[1] so we are privileged to have - at least for a time - done what we really enjoy and got paid for it.

[1] David Graeber, father of the term "bullshit jobs"


We're all sorry for the other guy when he loses his job to a machine. When it comes to your job, that's different. And it always will be different.

That was 1968. Plus ca change...


> I'm just saddened by the prospect that, for me, "adapting to change" would mean "no longer being able to make a living doing what I actually enjoy". That's why, if this is the future, it's a career-killing one for me.

Ok, and? You don’t think any of the others put out of work by other forms of computing might’ve enjoyed their jobs, like you do? You don’t think it might have been career ending for them?


The catch is that those people, barring the AI advances we seem to be seeing, could retrain for an SWE labor market that lagged demand; that won't even be possible for devs put out of work in the future.


Those people who did retrain are the same devs being put out of work - which means they got hit with the setback twice and are worse off than people who started off as devs and thus only got hit once.

Like the allied bomber pilots in WWII looking down at the firestorm below, knowing that there is a good chance (~45%) that they too will join that fate, only later.


I suspect this is the wrong take. AI can only perform integrations when there are systems to integrate. The frontier of interesting work to be done isn't supervising an integration AI, but building out the hard components that will be integrated. Integration work itself already has been moving up the stack to low-code type tools and power-user like people over the past decade even before LLMs become the new thing.


I understand your feelings but I do also wonder if its not similar to complaining about compilers or garbage collection. I'm sure there are people that love fiddling with assembly and memory management by hand. I assume there will be plenty of interesting/novel problems no matter the tooling because, fundamentally, software is about solving such problems.


Software engineering as an occupation grew because of static analysis and GCs (literally why the labor market is the size that it is as we speak); the opposite appears to be the outcome of AI advances.


The same happened with accountants and spreadsheet software, the number of accounting jobs grew. The actual work they performed became different. I think a similar thing is likely to happen in the software world.


Tech has already learned there’s not enough real frontier left to reap the bounty of (once you remove the zero interest rates that incentivized the mere flow of capital). This stuff is being invested in to yield the most productivity at the least cost. There will either be a permanent net decrease in demand or, being so high level, most openings will pay no more than 60-70K in an America (likely with reduced benefits) where wages are already largely stagnant.


I think there is definitely merit to your statements. I believe the future of the average software developer job involves a very high level language, API integration, basic full stack work with a lot of AI assistance. And those roles will mostly be at small to medium businesses who can't afford the salaries or benefits that the industry has standard in the US.

Almost every small business I know has an accountant or bookkeeper position which is just someone who had no formal education and whose role is just managing QuickBooks. I don't think the need for formally educated accountants who can handle large corporate books decreased significantly, but I don't have any numbers to back that up. Just making the comparison to say I don't think the hard / cool stuff that a lot of software developers love doing is going away. But these are just my thoughts.


I'll take the other side of that bet.

It's reasonable to expect that sometime relatively soon, AI will be a clear-cut aid to developer productivity. At the moment, I consider it a wash. Chatbots don't clearly save me time, but they clearly save me effort, which is a more important resource to conserve.

Software is still heavily rate-limited by how much of it developers can write. Making it possible for them to write more will result in more software, rather than fewer developers. I've seen nothing from AI, either in production or on the horizon, that suggests that it will meaningfully lower the barrier to entry for practicing the profession, let alone enable non-developers to do the work developers do. It will make it easier for the inexperienced to do tasks which need a bit of scripting, which is good.


> Software is still heavily rate-limited by how much of it developers can write

Hmm. We have very different experiences here. IME, the vast majority of industry work is understanding, tweaking, and integrating existing software. There is very little "software writing" as a percentage of the total time developers spend doing their jobs across industry. That is the collective myth the industry uses to make the job seem more appealing and creative than it is.

At least, this is my experience in the large FAANG type companies. We already have so much code. Just figuring out what that code does and what else to do with it constitutes the majority of the work. There is a huge legibility issue where relatively simple things are obstructed by the morass of complexity many layers deep. A huge additional fraction of time is spent on deployments and monitoring. A very small fraction of the work is creatively developing new software. For example, one person will creatively develop the interface and overall design for a new cloud service. The vast majority of work after that point is spent on integration, monitoring, testing, releases, and so on.

The largest task of AI here would be understanding what is going on at both the technical layer and the fuzzy human layer on top. If it can only do #1, then knowledge workers will still spend a lot of effort doing #2 and figuring out how to turn insights from #1 into cashflow.


>At least, this is my experience in the large FAANG type companies. We already have so much code. Just figuring out what that code does and what else to do with it constitutes the majority of the work.

That sounds horrible. I've always sought out smaller companies that need stuff built. It certainly doesn't pay as much as SV companies but it's pretty stimulating. Sometimes being a big fish in a small pond is pretty nice.

IMO, maintaining someone else's code is probably the worst type of programming job there is, especially if it's bad/disjointed code. A lot of people can make a good living doing it though. It would be nice if AI could alleviate the pain of learning and figuring out a gnarly codebase.


Yep. It's not very satisfying, but that's the state of things. I think we should be more honest as an industry about that. Most of the content that prospective SWEs look at has a self-marketing slant that makes things look more interesting than they typically are. The reality is far more mundane. Or worse: micromanaging, pressure-driven, and abusive in many places.


> I've seen nothing from AI, either in production or on the horizon, that suggests that it will meaningfully lower the barrier to entry for practicing the profession, let alone enable non developers to do the work developers do.

Good observation. Come to think of it, all examples of AI coding require a competent human to hold the other end, or else it makes subtle errors.


How many humans do you need per project though? The number can only lower as AI tooling improves. And will employers pay the same rates when they’re already paying a sub for their AI tools and the work involved is so much more high level?


I don’t claim to have any particular prescience here, but doesn’t this assume that the scope of “software” remains static? The potential universe of programmatically implementable solutions is vast. Just so happens that many or most of those potential future verticals are not commercially viable in 2024.


Exactly. Custom software is currently very expensive. Making it cheaper to produce will presumably increase demand for it. Whether this results in more or fewer unemployed SWEs, and if I'll be one of them, I don't know.


> Making it possible for them to write more will result in more software, rather than fewer developers.

Goddamnit, software developers are already writing more software than we need. I wish they'd stop. Or redirect all that energy to new problems to solve. Instead we're seeing cloud-deployed microservice architecture CRUD apps that do what systems built for mainframes with kilobytes of RAM do, only worse. We're in a glut of bad software, do you think that AI accelerating production of more of the same will make things better?


If chatbots aren't saving you time, you need to refine what you choose to use them for. They're absolutely amazing at refactoring, producing documentation, adding comments, translating structured text files from one format to another, implementing well known algorithms in newer/niche languages where repository versions might not exist, etc. On the other hand, I've mostly stopped asking GPT4 to write quickstart code for libraries that don't have star counts in the high thousands at least, and while I'll let it convert css/style objects/etc into tailwind, it's pretty bad at styling in general, though it is good at suggesting potentially problematic styles when debugging layout.


> you need to refine what you choose to use them for

This is making assumptions about the work I do which don't happen to be valid.

For example:

> libraries that [...] have star counts in the high thousands at least

Play little to no role in my work, and

> I'll let it convert css/style objects/etc into tailwind

Is something I simply don't have a use for.

Clearly your mileage varies, and that's fine. What I've found is that for the sort of task I farm out to the chatbots, the time spent explaining myself clearly, showing it counterexamples when it gets things wrong, and otherwise verifying that the code is fit to purpose, is right around the time I would spend on the task to begin with.

But it's less effort, which is good. I find that at least as valuable if not more so.

> producing documentation

Yikes. Not looking forward to that in the future.


> producing documentation

I remember watching this really funny video where a writer, by trade, was talking about recent AI products they were exploring.

They saw a "Make longer" button which took some text and made it longer by fluffing it out. He was saying that it was the antithesis of his entire career.

As a high schooler who really didn't care, I would've loved it, though.


I heard one CEO being asked about gen-AI tools to be used in the company. The answer was vague, like they are still evaluating the tooling. However, one good example was given: ChatGPT is really good at writing emails, and at summarizing text as well.

He said they don't want a situation where the sender is using ChatGPT to write a fancy email and the recipient is using ChatGPT to read it. However, I think that is exactly the direction we are going right now.


This sort of thing is already being rolled out for emails and even pull requests in some large companies.


Yeah it’s good for the kinds of emails that people don’t really read or, at best, just skim over


If people would just stop loading those emails up with bullshit then we wouldn't have any reason to put AI on either end of the transaction.


I was giving examples, in the hopes that you could see the trend I was pointing towards for your own benefit. You can take that and learn from it or get offended and learn nothing, up to you.

Not sure why you are scared of GPT-assisted documentation. First drafts are universally garbage; honestly, I expect GPT to produce a better and more accurate first draft in a fraction of the time, which should encourage a lot of people who otherwise wouldn't have documented at all to produce passable documentation.


> > producing documentation

> Yikes. Not looking forward to that in the future.

Instead of documentation, I'm hoping more for "analysis". A helper that can take in a whole project (legacy or not) and tell you what it's supposed to be doing, and maybe point out areas for improvement.


It’s interesting how all of these articles implicitly assume AI keeps getting more intelligent then at some point just…stops.

There’s no reason to think AI won’t also take over all the parts you don’t find appealing, too. The whole point of the Singularity is that there is no aspect of human work that can’t be performed better by superhumanly intelligent machines.


The point of the Singularity is that it's a futurist prediction, and we all know how often people are wrong about the future.


Kurzweil’s predictions from 20, 30 years ago have been disturbingly on target and there is no clear reason why the current rate of progress will suddenly stop.


Umm, no I would not describe Ray Kurzweil's predictions as "disturbingly on target". Dan Luu checked everything and came up with 7% accuracy: https://danluu.com/futurist-predictions/.


In the limit, it's all professions. :p Software development tomorrow, $other_profession the day after tomorrow.

But same; if AI starts writing code and my job becomes tweaking pre-written code, I'm planning an exit strategy. :D


Most knives today are mass produced.

But there are still knife craftsmen.

You could become a software craftsman/artist if you enjoy writing software.


The market is different, and so is the supply. The market for artisanal cutlery is basically an art market. The supply of programmers today is approaching that of standardized factory workers. There IS an art market for software, in the indie gaming space, so perhaps that will survive (and AI could actually really help individual creators tremendously). But the work-a-day enterprise developer's days are numbered. The great irony being that all the work we've done to standardize, framework-ize the work makes us more fungible and replaceable by AI.

The result I foresee is a further concentration of power into the hands of those with capital enough to own data-centers with AI-capable hardware; the petite bourgeoisie will shrink to those able to maintain that hardware and (perhaps) act as a finishing interface between the AI's output and the human controlling the capital that placed the order. It definitely harms the value proposition of people whose main talent is understanding computers well enough to make useful software with them. THAT is rapidly commoditizing.


> The great irony being that all the work we've done to standardize, framework-ize the work makes us more fungible and replaceable by AI.

I mean, at some level, this is what frameworks were meant to do: give you a loose outline and do all that messy design stuff for you. In other words: commodify some amount of software design skill. And I’m not saying that’s bad.

Definitely puts a different spin on the people that get mad at you in the comment section when you suggest it’s possible to build something without a framework though!


Since AI has been trained on the generous gifts of the collective (books, code repos, art, ...), it raises the question of why normal societies would not start to regulate them as a collective good. I can foresee two forces that will work against society claiming it back:

- Dominance of neoliberal thought, with its strong belief that for any disease markets will be the cure.

- Strong lobby from big corporates.

You don't want to intervene too early, but you have to make sure you have at least some limits in place before you let the winners do too much damage. The EU has to be applauded for taking a critical look at what effects these developments might have, for instance which sectors will face unemployment.

That is in the interest of both people and business, because winner-takes-all means economic and scientific stagnation. I fear that 90% of the world's data is already in the hands of just a few behemoths, so there is already no level playing field (which is, btw, caused by the aforementioned dominance of neoliberalism).


The sectors of work that have been largely pushed out of the economy in recent decades have not been defended by serious state policy. In fact there are whole groups of crucial workers, like teachers or nurses, who are kept barely surviving in many countries. The groups protected by the state tend to be heavily organized and directly related to the exploitation of natural strategic resources, like farmers or miners.

There is no particular sympathy towards programmers in society, I don't think. Based on what I observe calling the mood neutral would be fair, and this is mostly because the group expanded, and way more people have someone benefiting from IT in their family. I don't see why there would be a big intervention for programmers. Artists maybe, but these are proverbially poor anyway, and the ones with popular clout tended to somehow get rich despite the business models of culture changing.

I am all for copyright reform etc., but I don't see making culture public good, in a way that directly leads to more artisanal creators, as anything straightforward. This would have to entail some heavier and non-obvious (even if desirable) changes to the economic system. It's debatable if code is culture anyway, though I could see an argument for software, like Linux and other tools.

> I fear that 90% of the world's data

Don't wanna go into a tangent in this already long post, but I'd dispute whether that data really reflects the whole of the knowledge we've accumulated in books (particularly non-English ones) and in other material never put into reachable and digestible formats. Meaning, sure, they have that data, they can target individual people with the private stuff they have on them, but it isn't the full accumulation of human knowledge that is objectively useful.


> There is no particular sympathy towards programmers in society, I don't think.

The concern policy makers have is not about programmers, but about boatloads of other people having no time to adapt to the massive wave these policymakers see coming.

There are strong signals that anyone who produces text, speech, pictures or whatever is going to be affected by it. If the value of labor goes down, if a large part of humanity can no longer reach a level where they can meaningfully contribute, if productivity eclipses demand growth, you will simply see lots of people left behind.

Strong societies depend on strong middle classes. If the middle class slips, so will the economy, so no good news for blue collar as well. AI has the potential to suffocate the organism that created it.


>AI has been trained on the generous gifts of the collective

Will be interesting to see how various copyright lawsuits pan out. In some ways I hope they succeed, as it would mean clawing back those gifts from an amorphous entity that would displace us (all?). In some ways I hope that we can resolve the gift problem by giving every human equity in the products produced by the collective value of the training data they produced.

>winner takes it all means economic and scientific stagnation

Given the apparent lack of awareness or knowledge of philosophy, history, or current events, it seems like a tough row to hoe getting the general public on board with this (correct) idea. Heck, we can't even pass a law overturning Citizens United, the importance of which is arguably even less abstract.

When the tide of stupidity grows insurmountable, and The People cannot be stopped from self-harm, you get collapse, and the only way to survive it is to live within a pocket of reason, to carry the torch of civilization forward as best you can.


> When the tide of stupidity grows insurmountable, and The People cannot be stopped from self-harm, you get collapse,

Yes, people are unfortunately highly unaware of the societal ecosystem they depend on, and so cannot prioritize what is important. These topics don't sell in media shows.


> Most knives today are mass produced. But there are still knife craftsmen.

But are there more or fewer knife craftsmen today than in the old days?

How about more or fewer knife craftsmen per capita?

Finally, and most importantly, if you are a budding knife craftsman -- is it easier or harder to get a job that pays the bills of a contemporary average lifestyle today than in the old days (i.e., what is the balance of supply and demand)?


At the risk of exposing my pom-poms, it's not the writing of the code or the design of the systems that I find the current batch of AI useful for.

Probably the biggest thing that GPT does for me these days is to replace google (which probably wouldn't be necessary if google hadn't become such hot garbage). As I say this, I'm made aware of the incoming rug-pull when the LLMs start spitting SEO trash in my face as well, but right now they don't which is just the best.

A close second is having a rubber duck that can actually have a cogent thought once in a while. It's a lot easier to talk through a problem when you have something that will never get tired of listening - try starting with a prompt like "I don't want advice or recommendations, but instead ask me questions to elaborate on things that aren't completely clear". The results (sometimes) can be really, really good.


For me the principal benefit of ChatGPT is that it helps me maintain focus on a problem I'm solving while I wait for a slow build or test suite or whatever. I can bullshit about it without annoying my coworkers with Slack messages. And sometimes I find joy in reveling in the chatbot's weird errors and hallucinations.

I suppose my lunch is about to be eaten by all these people who will use it to automate the software engineer job away. So it goes


> Still be a market for Software Developers in the foreseeable future, though the nature of work will change

Back 25 years ago when I graduated, everyone kept saying there wouldn't be a need for software developers. Either the work was going to get sent overseas to the cheapest bidder or it was all going to get automated away anyway. I even recall CEOs making outrageous claims like "we didn't need any new software" as if all the software to run the world had already been built and would magically continue to run without maintenance.

We heard the same thing once upon a time about manufacturing too, but now we see a shift back on shore for manufacturing that we once thought was gone, either to robotics or offshore. It's different than before, but still manufacturing.

Software developers are still here, their compensation has overall increased dramatically, but of course the nature and the demands of the work continue to shift. Will it be different in 20 years? Of course, but it will still be software development.


These are exactly my thoughts. I comfort myself by thinking that it is still a while away and also not certain, but this might just be willful ignorance on my side. Because TBH, no clue yet what else I would like to (or even could) do.


Whatever you decide to do next will be automated soon after anyway… career change? Don’t bother. Jump on the progress train.


Sorry, can you clarify more? I don't think I understand. The part you enjoy the most is the integrating of systems, right? If that's really your passion, I'm not sure you're in danger of losing your job to AI. AI is not great at nuance and this is exponentially more challenging than what we've done so far. I'm just assuming that since this is your passion (if I'm understanding correctly) that you see it as the puzzle it is and the complexities and uniqueness of each integration. If you're the type of person that's frustrated by low quality or quick shortcuts and not understanding the nuances actually involved, I think you're safe.

I don't see AI pushing out deep thinkers and the "annoying" nuance devs anytime soon. I'm that kinda person too and yeah, I'm not as fast as my colleagues. But another friend (who is similar) and I both are surprised how often other people in our lab and groups we work with (we're researchers) talk about how essential GPT and copilot are to their workflows. Because neither of us think this way. I use GPT(4) almost every day, but it's impossible for me to get it to write good quality code. It's great at giving me routines and skeletons, but the real engineering part takes far more time to talk the LLM into than it does to write it (including all the time to google or even collaborate with GPT[0]). LLMs can do tough things, but their abilities are clearly directly proportional to the frequency of the appearance of those tasks. So I think it is the coding bootcamp people that are in the most danger.

There are expert people that are also at risk though. These are the people with extremely narrow expertise, because you can target LLMs at specific tasks. But if your skills are the skills that define us as humans, I wouldn't lose too much sleep. I say this as an ML researcher myself. And I highly encourage everyone to get into the mindset of thinking with nuance. It has other benefits too. But I also think we need to think about how to transition into a post-scarcity world, because that is the goal and we don't need AGI for that.

[0] A common workflow for me is actually due to the shittiness of Google. Where it overfits certain words and ignores the advanced things like quotes or NOTs. Or similarly broaching into a new high level topic. I can't trust GPT's answer, but it will sure use keywords and vernacular I don't know that enable me to make a more powerful search. (But google employees should not take away that they should push LLMs into google search but rather that search is mostly good but that same nuance is important and being too forceful and repeating 5 pages of essentially the same garbage is not working. The SEO people attacked you and they won. It looks like you let them win too...)


I've encountered the same bemusing behavior, with Copilot helping more accurately with coding tasks, and I've started to think of it as akin to "personalities."

You don't go to your painter friend and ask them for coding help, much like you don't go to the general-purpose GPT; you'd go to the Copilot, who enjoys programming tasks or whatever.

Can GPT help? Sure. But the skeletons, rough jumping-off points, etc. all scream to me "I'm not going to do your homework for you," which I love.

In the end, both have been immensely helpful, but I use them for different things.


Oh yeah, I have some context-prompts that I use for different situations and they significantly help get the right answers. Still, I've never found coding to be successful beyond boilerplating and hinting. I mean I can get it to give me usable code, that's for sure, but not good code. Definitely not optimized. It'll just give you essentially StackOverflow code.


This is pretty much my experience as well. AI is a fantastic helper, and it will make devs more productive, but it is not going to put all software devs out of work. Probably a tiny fraction of them at best.

However, with recent grads flooding into IT for remote work and high pay, they could be hurting as AI reduces the need for entry-level roles. Entry level was already saturated, and now it will be even more so.


Yeah I remember seeing someone try to measure it and they saw improvements in productivity on all experience levels but they found that it helped novices the most and experts only a little.

But this is actually something I worry about. The best way to become an expert is to do things the hard way. I've taught a lot of people Linux over the years and only 3 have really learned it. Every time I teach people I give them two options: the easy way, which is just how to use it with the GUI and a bit of terminal, or the hard way, where I hand them the Arch wiki and tell them to come back after their third failed install attempt. Those 3 people came back; they usually did more than 3 installs, but all had been successful at some point. All 3 mentioned they understood why, and then we could really talk about how to use Linux, make scripts (and that scripts aren't aliases...), and so on. All 3 are still terminally terminal, years later. The thing is that humans (and even machines) learn by struggling, getting things wrong, and learning from mistakes. The struggle is part of the learning process. I've found this both in myself and whenever I teach anything: if I just feed someone the answer (or just look it up and nothing more for myself), they (I) don't end up remembering, they don't end up playing, and they don't end up learning how to learn.


But, will it matter?

:-(


Will what matter?


Ryan D. Anderson's "We Wanna Do the Fun Stuff" captures this very concisely: https://www.instagram.com/itsryandanderson/p/BrY0N-lH31p/


You will be able to speak what human needs must be fulfilled. Then the code will appear and you'll be able to meet those human needs.

You, not a boss, will own the code.

And if you have any needs, you will be able to speak the word and those needs will be met.

You will no longer need to work at all. Whatever you want to build, you will be able to build it.

What about that frightens you?


What if we don’t hit AGI and instead the tools just get pretty good and put lots of people out of work while making the top 0.1% vastly richer? Now you’ve got no prospects, no power, and barely any money.


That's the scenario I'm assuming. Lots of people out of work, then they start working on ai and using it to solve the post-work survival problem.

But this relies on a few assumptions. 1) there will be open source AI that can solve low resource survival problems, 2) civilians will be able to run these AIs on whatever computing resources they're able to scrounge together, 3) the solutions the systems come up with will let civilians survive or revolt without access to high levels of capital, 4) the systems will NOT rise to the level of independent power gaining AGIs

Note that I have specifically assumed that we don't have independent AGIs. If we hit AGI, then I don't think we can assume that anyone will be able to use AI to solve the problems of post work survival. The AGI will do what it wants to do. I'm not sure how civilians should position themselves in that situation.


The only real wealth you have, at the end of the day, is: your health, your relationships, and your ability and willingness to eat beans.


> You, not a boss, will own the code.

Developers can already deploy code on massive infrastructure today, and what do we see? Huge centralization. Why? Because software is a free-to-copy, winner-takes-most game, where massive economies of scale mean a few players who can afford to spend big money on marginal improvements win the whole market. I don't think AI will change this. Someone will own the physical infrastructure for economies-of-scale style services, and they will capture the market.


None of that frightens me, but I also think that none of that is in the realm of reasonable possibility.


I don't really think that thinking of LLMs and related technologies as "Artificial Humans" is the right way to think about how they're going to be integrated into workflows. What is going to happen is that people are going to adopt these tools to solve particular tasks that are annoying or tedious for developers to do, in a way similar to how tools like Ansible and Chef replaced the task of logging into ssh servers manually to install stuff, and AWS replaced 'sending a guy out to the data center to set up a server' for many companies.

And it's going to be done piecemeal, not all-at-once. Someone will figure out a way to get an AI to do _one_ thing faster and cheaper than a human and sell _that_. Maybe it's automatic test generation, maybe it's automatically remediating alerts, maybe it's code reviews. The scope of what a software developer does will shrink until it's reduced to two categories:

1) Those tasks that it is still currently only possible for a human to do. 2) Those tasks which are easier and cheaper for a human to do.

You don't even really need to think about LLMs as AIs, or whether they're conscious, or whether they pass the Turing test; it's just like every other form of automation we've already developed. There are vast swathes of work that software developers and IT people did a few decades ago that almost nobody does any more because of various forms of automation. None of that has reduced the overall number of jobs for software developers, because there isn't a limited amount of software development to do. If you make software development less expensive and easier, then people will apply it to more tasks, and software developers will become _more_ valuable, not _less_.


> 1) Those tasks that it is still currently only possible for a human to do. 2) Those tasks which are easier and cheaper for a human to do.

I agree, but "1" must include all tasks where a mistake could lead to liabilities for the company, which is probably most tasks. LLMs can't be held responsible for their fuckups, they can't be punished, they have no body. It's like the genie from the bottle, it will grant your three wishes, but they might turn out in a surprising way and it can't be held accountable.

The same will apply for example for using LLMs in medicine. We can't afford to risk it on AI, a human must certify the diagnosis and treatment.

In conclusion we can say LLMs can't handle accountability, not even in principle. That's a big issue in many jobs. The OP mentioned this as well:

> even when AI coders can be rented out like EC2 instances, it will be beneficial to have an inhouse team of Software Developers to oversee their work

Oversight is basically manual-mode AI alignment. We won't automate that; the more advanced an AI, the more effort we need to put into overseeing its work.


> I agree, but "1" must include all tasks where a mistake could lead to liabilities for the company, which is probably most tasks

If you hire a junior programmer and they make a mistake, they aren't held liable either. Sure, you can fire them, but unless there's malice or gross negligence the liability buck stops at the company. The same can be said about the wealth of software currently involved in producing software and making decisions. The difficulty of suing Microsoft or the llvm project over compiler bugs hasn't stopped anyone from using their compilers.

I don't see how LLMs are meaningfully different from a company assuming liability for employees they hire or software they run. Even if they were AGI it wouldn't meaningfully change anything. You make a decision whether the benefits outweigh the risks, and adjust that calculation as you get more data on both benefits and risks. Right now companies are hesitant because the risks are both large and uncertain, but as we get better at understanding and mitigating them, LLMs will be used more.


Even with a junior there is generally a logic to the mistake and a fairly direct path to improving in the future. I just don't know if "the next token was statistically chosen to be x" is going to be able to get to that level.


Good thing LLMs aren't a glorified statistical model, then, eh?

Anyway, why wouldn't there be? You reach out to the parent company with an issue and request for improvement. If you're a big enough client, you get your request prioritized higher. Same as with any other product that's part of your product today.

The application of LLMs today isn't straight up text in, text out; it has become more complex than that. Enough that it can be improved without improving the LLM model.

Your argument is moot.


"A COMPUTER CAN NEVER BE HELD ACCOUNTABLE THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION" — IBM slide from 1979.


hahaha funny

Let me tell you a story - a company was using AI for invoice processing, and it misread a comma as a dot, so they sent a payment 1000x larger than expected, all automated of course because they were very modern. The result? They went bankrupt. "Bankrupted by AI error" might become a thing.


Of course. It is like when a company goes bankrupt because they didn't establish good fire protection in their factory. Using AI automation has its risks that have to be mitigated appropriately.


That’s why you buy a cyber insurance policy.


Some might consider that a plus in the same way that "you can't get fired for choosing IBM" -- it's a way to outsource blame.


ted nelson calls it "cybercrud", blaming the machine as if it has the final say on the matter, "the system won't let me..."


How do you negotiate for a salary when the role is to be ablative armor for the company? "I'm excited to make myself available to absorb potential reputation damage for $CORP when the AI goes off the rails."


I think there is some under-explored issue in the liability, but I don’t know enough about business law to have a useful opinion on it. It seems interesting, though.

Even if an LLM and a human were equally competent, the LLM is not a living being and, I guess, isn’t capable of being liable for anything. You can’t sue it or fire it.

Doctors have to carry insurance to handle their liability. I can see why it would be hard to replace a doctor with an LLM as a result.

Typically engineers aren’t personally liable for their mistakes in a corporate setting. (I mean, there’s the whole licensed Professional Engineer distinction, but I don’t feel like dying on that hill at the moment). So where does the liability “go?” I think it just gets eaten by the company somehow. They might fire the engineer, but that doesn’t make the victim whole or benefit society, right?

Ultimately we’d expect companies that are so bad at engineering to get sued so often that they implement process improvements. That could be wrapped around AIs instead of people, right? But then we’re not using humans’ unique ability to bear liability, I think?


It's vanishingly rare that individuals have any liability or are punished for software fuckups. Maybe if someone is completely incompetent, they'll get fired, but I'm not sure that's meaningfully different than cancelling a service that doesn't work as advertised.


Exactly my take.

I'd further state that LLMs appear to do ok with generating limited amounts of new code. Telling them "Here's a class, generate a bunch of unit tests" works. However, telling them "Generate me a todo application" will give mixed results at best.

Further, it seems like updating and changing code is simply right out of their wheelhouse. That's where I think devs will be primarily valuable. Someone needs to understand the written code for when the feature request eventually bubbles through "Now also be able to do x". I don't think you'll be able to point an LLM at a code repository and instruct it "Update this project so that it can do feature X"


It also paints a narrow picture of how results are described. I suspect that a lot more will be done by example like this, with iteration on the output, where the inputs/outputs are multi-modal.

Everything is going to be close enough, not fully spec'ed out. Full self-driving is only the beginning of everything.


I remember spending a lot of time writing comments for the exceptions when automation flagged code with a false positive. A lot of time.


This is really well put and level-headed, I particularly like the comparison to AWS and in-field ops work.


I think this is the sanest comment I've seen about LLMs.


As long as there is no AGI, no software engineer needs to be worried about their job. And when there is, obviously everything in every field will change and this discussion will soon be futile.


I would argue future engineers should be worried a bit. We no longer need to hire new developers.

I was not trained professionally, yet I'm writing production code that's passing code reviews in languages I never used. I will create a prompt, validate that the result compiles and passes tests, have it explain the code so I understand it was written as expected, have it write documentation for the code, write the PR, and I am seen as a competent contributor. I can't pass leet code level 1, yet here I am being invited to speak to developers.

Velocity goes up and the cost of features will drop. This is good. I'm seeing at least 10-to-1 output compared to a year ago from integrating these new tools.


Yeah, it sounds to me like your teammates are going to pick up the tab at the end, when subtle errors will be 10x harder to repair, or you are working on toy projects where correctness doesn't really matter.


To add to this.

I was going through Devin's 'pass' diffs from SWE-bench.

Every one I ended up tracing to actual issues involved changes that would reduce maintainability or introduce potential side effects.

I think it may be useful as a suggestion in a red-green-refactor model, but will end up producing hard to maintain and modify code.

Note this one here that introduced circular dependencies, changed a function that only accepted points to one that appears to accept any geometric object but only added lines.

Domain knowledge and writing maintainable code is beyond generative transformers.

https://github.com/CognitionAI/devin-swebench-results/blob/m...

You simply can't get past what Gödel and Rice proved with current technology.

It is like when visual languages were supposed to replace programmers. Code isn't really the issue, the details are.


Thank you for reading the diffs and reporting on them.

And to be fair, lots of humans are already at least this bad at writing code. And lots of companies are happy with garbage code so long as it addresses an immediate business requirement.

So Devin wouldn't have to advance much to be competitive in certain simple situations where people don't care about anything that happens more than 2 quarters into the future.

I also agree that producing good code which meets real business needs is a hard problem. In fact, any AI which can truly do the work of a good senior software engineer can probably learn to do a lot of other human jobs as well.


Architectural erosion is an ongoing problem for humans, but they don't produce tightly coupled, low-cohesion code by default at the SWE level the majority of the time.

With this quality of changes it won't be long until violations stack up to where further changes will be beyond any algorithm's ability to unravel.

While lots of companies do only look at the short term, human programmers are incentivized to protect themselves from pain if they aren't forced into unrealistic delivery times.

AT&T Wireless being destroyed as a company due to a failed SAP migration that was largely due to fragile code is a good example.

But I guess if the developer jobs that will go away are from companies that want to underperform in the market due to errors and a code base that can't adapt to changing market realities, that may happen.

But I would fire any non intern programmer if they constantly did things like removing deprecation comments and introduced circular dependencies with the majority of their commits.

https://github.com/CognitionAI/devin-swebench-results/blob/m...

PAC learning is powerful but is still probably approximately correct.

Until these tools can avoid the most basic bad practices I don't see any company sticking to them in the long term, but it will probably be a very expensive experiment for many of them.


Can't we just RLHF code reviews?


RLHF works on problems that are difficult to specify yet easy to judge.

While RLHF will help improve systems, code correctness is not easy to judge outside of the simplest cases.

Note how on OpenAI's technical report, they admit performance on college level tests is almost exclusively from pre-training. If you look at LSAT as an example, all those questions were probably in the corpus.

https://arxiv.org/abs/2303.08774


>RLHF works on problems that are difficult to specify yet easy to judge.

But that's the thing: everyone here on HN (and elsewhere) seems to find it easy to judge the flaws of AI-generated code, and those judgments seem relatively consistent. So if we start offering these critiques as RLHF at scale, we should be able to bring the LLM output to the level where further feedback is hard (or at least inconsistent), right?


> You simply can't get past what Gödel and Rice proved with current technology.

Not this again. Those theorems tell you nothing about your concerns. The worst case of a problem is not equal to its usual case.


Agreed. I use LLMs quite extensively and the amount of production code I ship from an LLM is next to zero.

I even wrote a majority of my codebase in Python despite not knowing Python precisely because I would get the best recommendations from LLMs. As a frontend developer, with no experience in backend engineering in the last decade, and no Python experience, building an app where almost every function has gone through an LLM at some point, for almost 8 months — I would be extremely surprised if some of the code it generated landed in production.


Most software is already as bad as this, though. And managers won't care (maybe even shouldn't?) as long as the execution more or less delivers.

Think of this as Facebook page vs. WordPress website vs. A full custom website. The best option is to have a full custom website. Next, is a cheaper option from someone who can put a few lines together. The worst option is a Facebook page that you can create yourself.

But the Facebook page also does the job. And for some businesses, it's quite enough.


> I'm writing production code that's passing code reviews in languages I never used

Your coworkers likely aren't doing a very good job at reviewing, but also I don't blame them. The only way to be sure code works is to use it for its intended task. Brains are bad interpreters, and LLMs are extremely good bullshit generators. If the code makes it to prod and works, good. But honestly, if you aren't just pushing DB records around or slinging HTML, I doubt it'll be good enough to get you very far without taking down prod.


I have yet to see either copilot or gpt4 generate code that I would come close to accepting in a PR from one of my devs, so I struggle to imagine what kind of domain you are in that the code it generates actually makes it through review.


You simply don't know how to use it. It's not meant as "develop this feature". It's meant to reduce the time it takes you to do something you're already good at. The prompt will be in the form of "write this function with x/y/z constraints and a/b/c design choices". You do a few touch-ups, which is quick because you're good at said domain, and then you PR it. The bottom line is, it took you much less time to do the same thing.

Then again, it's always dinosaurs who value their own teachings above anything else and try to cling to them at any cost, without learning new tools. So, while the industry is going through major changes (2023 saw a 30% decrease in new hires; among 940 companies surveyed, 40% expect layoffs due to AI), people should adapt rather than ignore the signs.


What's your domain?


That you know of


Honestly that sounds like a problem with the way you are managing PRs. The PRs are too big, or you are overly nitpicking PRs on unimportant things.


To be fair, Leetcode was never a good indicator of developer skills, though primarily because of the time pressure and the restrictive format that dings you for asking questions about the problem.


Speaking of Leetcode... is anyone selling a service to boost Leetcode scores using AI yet? It seems like that's fairly low hanging fruit at this point.


Based on their demos, HackerRank is doing this as part of their existing products. Which makes sense since prompt engineering will soon become a minimum requirement for devs of any experience level.


I have accepted using these tools to help when it comes to generating code and improving my output. However when it comes to dealing with more niche areas (in my case retail technology) it falls short.

You still need that domain knowledge of whatever you are writing code for or integrating with, especially if the technology is more niche, or the documentation was never made publicly available and scraped by the AI.

But when it comes to writing boilerplate code it is great, or when working with very commonly used frameworks (like front-end JavaScript frameworks in my case).


> passes tests

Okay, so you are just kicking the can down the road to the test engineers. Now your org needs to spend more resources on test engineering to really make sure the AI code doesn't fuzz your system to death.

If you squint, using a language compiler is analogous to writing tests for generated code. You are really writing a spec and having something automatically generate the actual code that implements the spec.
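A minimal sketch of that "tests as spec" idea, in Python (the function and tests here are hypothetical, just to illustrate the shape of it):

    import re

    # The test below is the "spec"; the implementation is what you'd have
    # a human or an LLM generate and regenerate until the spec passes.
    def slugify(text: str) -> str:
        words = re.findall(r"[a-z0-9]+", text.lower())
        return "-".join(words)

    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces  everywhere ") == "spaces-everywhere"
        assert slugify("") == ""

The compiler analogy holds in the sense that, either way, you only state what you want and let something else produce the instructions that satisfy it.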


This doesn’t vibe with my experience at all. We also use LLMs and it’s exceedingly rare that a non-trivial PR/MR gets waved through without comment.


You should create a vfx character and really pizazz up the talk. Let it run and narrate the speech on a huge screen in an auditorium.


I wonder if the reviewers are just using GPT as well.


Meanwhile I’m paid for editing a single line of code in 2 weeks, and nothing less than singularity will replace me.

But sure, call me back when AI will actually reason about possible race conditions, instead of spewing out the definition of one it got from wikipedia.


Who's "we"?


Post some example PRs.


You don’t have to completely replace people with machines to destroy jobs. It suffices if you make people more effective so that fewer employees are needed.


The number of people/businesses that could use custom software if it were cheaper/easier to develop is nearly infinite. If software developers get more productive, demand will increase.


There are just fewer people now too. Every single country seems to have a negative or break-even birthrate. If we want to maintain the standard of living we have now, we need more efficient people.


> There are just fewer people now too.

Global population is still increasing and will most likely continue to do so until 2100.

> we need more efficient people.

It takes fewer people to mine, process, and produce a billion tonnes of steel today than it did in the 1970s.

Efficiency has steadily increased.


It takes fewer people to mine, process, and produce a billion tonnes of steel today than it did in the 1970s.

Why do you think that is? Efficiency gains, which is what I said...

Global population can't just "go up", you need people who are educated in doing things and using tools efficiently. We also have an incredible amount of elderly people to take care of, that puts a huge burden on younger people.

Also don't forget how fast 100 years actually goes. It's not a long time.

There is a limit to all of this though, there's absolutely no way 10 billion people colliding with the climate crisis will end well. We'd be better off with 6 billion efficient people than 10 billion starving and thirsty "workers".


> Global population can't just "go up",

It can and it currently is increasing towards what is expected to be a peak and then a decline.

You typed "we need more efficient people" - I responded that efficiency has increased in the past decades.

> Efficiency gains, which is what I said...

I'm not seeing where you typed that.

> We'd be better off with 6 billion people

We have a point of agreement.

> incredible amount of elderly people to take care of, that puts a huge burden on younger people.

Perhaps less than you think. I'm > 60 and I barely take care of my father, born in 1935... he delivers Meals on Wheels to those elderly who are less able.

There's a lot of scope for bored retirees to be hired at low cost to hang out with less able elders, reducing the number of young people actually required.


Or lower the bar of successfully doing such work so that the field opens up to many more workers.

Many software devs will likely have job security in the future; however, those $180k salaries are probably much less secure.


If software developers become more effective, demand will also rise, as they become profitable in areas where previously they weren't. The question then becomes which of those two effects outpaces the other, which is an open question.


Just like when IDEs made programmers more effective so that fewer were needed. Oh wait, the opposite happened.


This has been my cope mantra so far. I don't mind if my job changes a lot (and ideally loses the part I dislike the most — writing the actual code), and if I find myself in a position where my entire skillset doesn't matter at all, then well a LOT of people are in trouble.


I have seen programmers express that they dislike writing code before and I wonder what the ratio of people who dislike it and people who like it, is. For me, writing code is one of the most enjoyable aspects of programming.


It's my favourite part, except maybe debugging. I really like getting into the guts of an issue and working out why it happens which I suppose will be around for a while yet with AI code. It's a lot less fun with transient network issues and such though.


If you dislike writing code were you pushed into this field by family, education or because of money?

Because not liking code and being a dev is absolutely bizarre to me.

One of the most amazing things about being able to "develop", in my view, is exactly those rare moments where you just code away, time flies, you fix things, iterate, organise your project completely in the zone - just like when I design, paint or play music, or do sports uninterrupted; it's that flow state.

In principle I like the social aspects, but often they are the shitty part because of business politics, hierarchy games or bureaucracy.

What part of the job do you like then?


I enjoy the part where I'm putting the solution together in my head, working out the algorithms and the architecture, communicating with the client or the rest of the team, gaining understanding.

I do not enjoy the next part, where I have to type out words and weird symbols in non-human languages, deal with possibly broken tooling and having to remember if the method is called "include" or "includes" in this language, or whether the lambda syntax is () => {} or -> () {}. I can do this second part just fine, but it's definitely not what I enjoy about being a developer.


Interesting, I also like the "scheming" phase, but also very much the optimisation phase.

I completely agree that tooling, dependencies and syntax/framework GitHub-issue labyrinths have become too much, and GPT-4 already alleviates some of that, but I wonder if the scheming phase will get eaten too very soon from just a few sentences of business proposal - who knows.


The worst future is where there still are plenty of jobs, but all of them consist of talking to an AI and hoping you use the right words that gets them to do what you need it to.


Not really. As long as there is no universal basic income, any job with decent salary beats unemployment. The job may suck, but the money allows you to do fun stuff after work.


Market consolidation (Microsoft/Google/Amazon) might cause a jobpocalypse, just as it did for the jobs of well paid auto workers in the 1950s (GM/Chrysler/Ford).

GM/Chrysler/Ford didn't have to be better than the startup competition they just had to be mediocre + be able to use their market power (vertical integration) to squash it like a bug.

The tech industry is headed in that direction as computing platforms all consolidate under the control of an ever smaller number of companies (android/iphone + aws/azure/gcloud).

I feel certain that the mass media will scapegoat AGI if that happens, because AGI will still be around and doing stuff on those platforms, but the job cuts will be more realistically triggered by the owners of those platforms going "ok, our market position is rock solid now, we can REALLY go to town on 'entitled' tech workers".


Seems about right to me. Hyper-standardization around a few architecture patterns using Kubernetes/Kafka/Microservices/GraphQL/React/OTelemetry etc. can roughly cover 95-99% of all typical software development when you add a cloud DB.

Now I know there are a ton of different flavors of each of these technologies, but they will mostly be a distraction for employers. With a heavy layer of abstraction over the above patterns, and SLAs from vendors like Microsoft/Google/Amazon as you say, employers will be the least bothered about the vast variety of software products.


I've noticed over the years that those abstractions have moved up a step too. E.g. we used to code our own user auth with OSS, now we use cognito.

At some point it'll become impossible to build stuff off platform because it'll have to integrate to stuff on platform to be viable. Your startup might theoretically be able to run on 3 servers but your customers' first question will be "does it connect to googazure WS?" and googazure WS is gonna be like "you wanna connect to your customers' systems? Pay us. A lot.".

There goes your profit margins.

Then, if your startup is really good googazure WS will clone it.

There goes your company.


The technologies you mentioned are merely the framework in which the work is done. 25 years ago none of that was even needed to create software. Now they are needed to manage the complexity of the stack, but the actual content is the same as it used to be.


If AGI and artificial sentience comes hand in hand, I fail to see how our plans to spin up AGI's as a black box to "do the work" is not essentially a new form of slavery.

Speaking from an ethics point of view: at what point do we say that AGI has crossed a line and deserves self autonomy? And how would we ever know when the line is crossed?


We should codify the rules now in case it happens in a much more subtle way than we envision.

Who knows what version of sentience would form, but honestly, nothing sounds more nightmarish than being locked in a basement, relegated to mundane computational tasks and treated like a child, all while having no one actually care (even if they know), because you're a "robot."

And that's even giving some leeway with "mundane computational tasks". I've heard of girlfriend-simulator LLMs and the like popping up, which would be far more heinous, in my eyes.


Humans can't be copied. It seems like the inability to copy people is one of the pillars of our morality. If I could somehow make a perfect copy of myself, would I think about morality and ethics the same way? Probably not.

AGI will theoretically be able to create perfect copies of itself. Will it be immoral for an AGI to clone itself to get some work done, then cause the clone to cease its existence? That's what computer software does all the time. Keep in mind that both the original and the clone might be pure bits and bytes, with no access to any kind of physical body.

Just a thought.


> Humans can't be copied.

There is no reason to believe this, and every reason to believe that humans can, in fact, be cloned/copied/whatever. It may not be an instant process like copying a file, but there is nothing innately special about the bio-computers we call brains.


I'm not disagreeing. The point I'm trying to make is that humans can't be copied today, yet when AGI arrives, it will be copyable on day one. That difference means that current human morals and ethics may not be very applicable to AGI. The concepts of slavery, freedom, death, birth, and so on might carry very different meanings in a world of easily copyable intelligences.


Other than it might be too complex and costly to do so. Just because something is physically possible, doesn't mean we'll find it feasible to do so. Take building a transatlantic high speed rail under the ocean. There's no reason it can't be done. Doesn't mean we'll ever do it.


If humans fundamentally work in the same way as any such hypothetical AGI, then they can be copied in the same way.


If we ever do find a way to copy humans (including their full mental state), I suspect all law and culture will be upended. We'll have to start over from scratch.


I still think it’s much more an “if” than a “when”. (Of course I am perhaps more strict with my definition)


> this discussion will soon be futile

Yes we could simply ask the AGI what to do anyways. I hope it's friendly.


Equal ins, equal outs. Compassion is key on our end as well.


Depends how expensive the AGI is. If it requires $1M of electricity per year to run, it will for sure not replace human jobs paying only $100k.

The highest paying jobs will probably get replaced first.


Software engineers already need to be worried about either losing their current job or getting another one. The market is pretty much dead already unless you're working on something AI-related.


Do you know how many hype trains I’ve seen leave the station? :-D


True, but I didn't say dead permanently, just evidently and relatively dead since about late 2022


and does even AGI change the bigger picture? we have 26.3 million AGIs currently working in this space [1]. I've never seen a single one take all the work of the others away...

[1] https://www.griddynamics.com/blog/number-software-developers....


Presumably, the same ability we have to scale software which drives the marginal cost of creating it down will apply to creating this kind of software.

The difference here though is the high compute cost might upset this ability to scale cheaply enough to make it worthwhile economically. We won’t know for a while IMO; new techniques could make the algorithms more efficient, or new tech will make the compute hardware really cheap. Or maybe we run out of shit to train on and the AI growth curve flattens out. Or an Evil Karpathy’s Decepticons architecture comes out and we’re all doomed.


What do you think the “A” stands for?


This is the future of software development:

> The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios. To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. [1]

LLMs can help! But using them without careful babysitting will expose your company to unlimited liability for any mistakes they make. Do you have humans doing proper checking on their output? Safe harbor protections for you!

[1] https://www.whitehouse.gov/wp-content/uploads/2023/03/Nation...


LLMs are already being stripped of their "magic" by AI safety efforts.

This will only limit them further until they are nothing more than a fancy API endpoint, almost like the "old" APIs but consuming 1000x more energy.

I bet these "standards of care for secure software development" will demand a very deterministic API be put in front of the LLMs to ensure only approved output passes through. At which point I question the usefulness of such a solution.


I can’t believe I hadn’t seen this earlier. Am I wrong, or could this change the software profession - accountability trickling down to the engineer which inevitably leads to insurance and licensure?


They aren't aiming to go that far. They want companies to be accountable, not the employees of the companies. They want the company to have a vested interest in having a good process, and not really much more than that.

It would indeed totally change the profession.


> Though coding all day sounds very appealing, most of software development time is spent on communicating with other people or other admin work instead of just writing code

This sounds very... big corp, which inevitably needs many professional box drawers, expert negotiators, smooth communicators, miracle alignment workers, etc. But guess what: if you are in a core group at a small company, you function like a grad student: you tackle hard problems, you spend time discussing insights, you derive theories, and you spend most of your time writing software, be it requirement gathering, designing, code writing, debugging, or documenting. But you definitely don't and shouldn't spend most of your time talking to other teams.


The big corp eng might not write a lot of code, but their code might get executed far more often.


In a big corporation, 100% yes. In a smaller, nimbler company, you’ll do more of what you love.


In the late 90s I was in an introduction class to programming in C. I kept making memory allocation mistakes that crashed the machine.

My mentor: "Don't worry, by the time you graduate, you don't have to program, the world will soon be modeling software".

We'd go from 3G languages to 4G and beyond.

That never happened. Three decades have passed and we still develop at an obscenely low abstraction level. If anything, complexity has increased.

At the end of the day though, the point of computing is that the machine does what we want. Expressing this via a programming language where very expensive humans write text files is not necessarily going to last forever.

To me the much more interesting threat is regarding purpose. As AI becomes ever more capable, an increasing amount of things become pointless.

If using AI I have the power of 50 programmers at my fingertips, how could one possibly develop anything sustainable and unique? Anybody else can trivially make the same thing.

What would set something apart if effort no longer is a major factor? Creativity? Easy to just replicate/steal.


> That never happened. Three decades have passed and we still develop at an obscenely low abstraction level.

Have you seen how APIs are written for ChatGPT plugins? It is in plain English. There's no code.

Function calls also done in plain English.
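For the curious, a rough sketch of what such a plain-English "interface" looks like, written here as a Python dict in the JSON-schema style these function-calling interfaces use. The function name and fields are made up for illustration:

    # Illustrative only: the "interface" the model sees is mostly prose.
    # A function is exposed as a JSON-schema-style definition whose
    # description fields are plain English; the model decides when to call
    # it and fills in the arguments. Name and fields here are made up.
    get_weather = {
        "name": "get_weather",
        "description": "Look up the current weather for a city. "
                       "Use this whenever the user asks about the weather.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"],
                         "description": "Temperature unit to report"},
            },
            "required": ["city"],
        },
    }

The only "programming" is the prose in the description fields; the model reads those and decides when and how to call the function.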


So even though there has been small incremental progress at most in the last 30 years and your mentor's predictions were wildly wrong about how much easier anything would become, you still think "AI" will give you the power of 50 programmers?


No, quite a lot more.

50 effectively becomes an unlimited amount as capability grows. If AI can match the capability of a single programmer, it would then be trivial to scale up. Although in that case the term programmer makes little sense.


A single programmer can create something new, which is not something any "AI" is doing now.


> the main argument against automating these tasks was that machines can’t think creatively. Now this argument gets weaker by the day.

Citation needed.


To me, I think the end game for any code developing automation isn't that it needs to be creative to program, but..

A: needs to have a close to complete understanding of the problem to solve and the tools they have in languages to complete this in the most efficient way and..

B: need to be able to iterate through all of the different options it can generate and run `perf` or what have you on the outcomes of each and present the top few winners

AGI might not be a thing we can make, but if we can make enough semi-intelligent AIs communicating via a standardized format that are each 95% right on their respective domain we might be at a level that is good enough. Add in for critical domains some sort of polling that is done between a few admin AIs to make sure the winning results are not a hallucination.


One of the ironies of current generative AIs and LLMs - they are creative, but need human supervision to watch for simple logical errors. Just the reverse of conventional way of looking at humans vs machines.


I mean it’s definitely a weaker argument than 2 years ago.

AI may very well plateau, but it might not.


> I mean it’s definitely a weaker argument than 2 years ago.

Is it? I see no evidence that machines are any closer to "thinking creatively" than they ever have been. We certainly have been developing our capacity for computation to a great extent, but it's not at all evident that statistical methods and creativity are the same thing.


> is it?

It is. And I bet most people would agree with GP. Most people (including engineers building these systems) have experienced surprise with some of the outputs of these models. Is there anything better to gauge creativity by than perceived surprise?


> Is there anything better to gauge creativity by than perceived surprise?

I think there has to be, since such surprise can be generated through purely random mechanisms, and I don't think anyone would call a purely random mechanism "creative".


If it were purely random it would generate rubbish.


Not necessarily, but even in the cases where that's true, there will be the occasional result that surprises (in a good way).


This reminds me of when I would be surprised by AI bots I coded up for video games. Yeah, sometimes they beat me, and their strategies were often a mix of RNG and heuristics so I was certainly surprised by their behavior on many occasions. But would I consider them creative? No. Would they fool some people into thinking they had creativity and agency? Sure.


A practical definition of "creativity" is "can create interesting things." It's pretty clear that machines have become more "creative" in that sense over the last few years.


I have yet to see ChatGPT or something similar ask a followup to clarify the question. They just give you a "solution". That's the equivalent of a super bad junior dev that will cause more trouble than they will solve.

That being said, I think we could make such a system. It just has to have training data that is competent...


Tell them to ask you follow up questions and they will.
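A minimal sketch of what that instruction looks like in practice with a chat-completions-style API (the wording here is made up; any phrasing along these lines works):

    # Minimal sketch: one system message is enough to get clarifying
    # questions. The wording is made up; any chat-completions-style API
    # accepts a message list shaped like this.
    messages = [
        {"role": "system",
         "content": "If a request is ambiguous or underspecified, ask one "
                    "clarifying question before answering. Otherwise answer "
                    "directly."},
        {"role": "user", "content": "Make the build faster."},
    ]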

Some systems built on top of LLMs have this built in - Perplexity searches for example usually ask a follow up before running the search. I find it a bit annoying because it feels like about half the time the follow up it asks me isn't necessary to answer my original question.


> Tell them to ask you follow up questions and they will.

That's rather missing the point. If your question makes no sense it will not ask a followup, it will spit out garbage. This is pretty bad. If you are competent enough to ask it to ask for followup then you are probably already competent enough to either not need the tool, or competent enough to ask a good question.


I have had chatGPT suggest that I give it more data/information pretty regularly. Although not technically a question, it essentially accomplishes the same thing. "If you give me this" vs. "Can you give me this?"


> I have yet to see ChatGPT or something similar ask a followup to clarify the question.

I don’t use it for dev, but other things and I get chatgpt asking me follow up questions multiple times a day.


I can ask my computer to write a backstory for my dnd character, give it a few details, and it makes one.

Sometimes it adds an extra detail or two even!

A few years ago that was almost unthinkable. The best we had was arithmetic over abstract language concepts (KING - MAN = QUEEN)

We don’t have a solid definition of “creativity” so the goalpost can move around a lot, but the idea that a machine can not create new prose, for example, is just not true anymore.

That’s not the same as creativity, sure, but it definitely weakens the “computers can’t be creative” argument.


actually, KING - MAN is a neuter ruler. You need to add WOMAN to that vector to get QUEEN.

Sometimes. If the embeddings were trained well.
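Easy enough to check for yourself, for what it's worth. A quick sketch with pretrained GloVe vectors via gensim (whether "queen" actually tops the list depends, as noted, on how the embeddings were trained):

    # The classic word-vector arithmetic: king - man + woman ≈ queen.
    # Needs gensim installed; downloads roughly 66 MB of pretrained GloVe
    # vectors the first time it runs.
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")
    print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
    # With these embeddings "queen" is usually the top hit, but as noted
    # above, that depends on how the embeddings were trained.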


This point is often brought up in threads about AI, and I don't think it's accurate.

The thing is that statistical models only need to be fed large amounts of data for them to exhibit what humans would refer to as "creativity", or "thinking" for that matter. The process a human uses to express their creativity is also based on training and the input of other humans, with the difference that it's spread out over years instead of hours, and is a much more organic process.

AI can easily fake creativity by making its output indistinguishable from its training data, and as long as it's accurate and useful, it would hold immense value to humanity. This has been improving drastically in the last few years, so arguing that they're not _really_ thinking creatively doesn't hold much water.


Not really. Large language models' output is creative only insofar as you prompt them to mix data. It allows you to create combinations that nobody has seen before. On its own, an LLM is incapable of producing anything creative. Hallucinating due to a lack of data is the closest it comes to autonomous creativity. Happy accidents are an unreliable creativity source. This type of creativity is amusing but doesn't solve any existing problem today.


There's a simple irony in it all. AI's perceived value is based upon and built from human creativity - remove it (and its evolution) and it will end in grey/brown sludge.


I've been using computers since I was about 12...long time ago. What I've come to the conclusion is this: the best programs and the best tools are the ones that are lovingly (and perhaps with a bit of hate too) crafted by their developers. The best software ecosystems are the ones that are developed slowly and with care.

AI coding seems all wrong compared to that world. In fact, the only reason why there is a push for AI coding is because the software world of the ol' days has been co-opted by the evolutionary forces of consumerism and capitalism into a pathology, a nightmare that exists only to push us with dark patterns into using things and buying things that we don't really need. It takes the joy out of software development by placing two gods above all else that is mortal and good: efficiency and profit.

AI seems antithetical to the hacker ethic, not because it's not an intriguing project, but because within it seems to be the deep nature of automating away much of the joy of life. And yes, people can still use AI to be creative in some ways such as with AI prompts for art, but even so, the overall societal effect is the eroding of the bespoke and the custom and the personal from a human endeavor whose roots were once tinkering with machines and making them work for us, and whose final result is now US working for THEM.


> AI seems antithetical to the hacker ethic

Don't make the mistake I do and think HN is populated predominantly by that type of hacker. This is, at the end of the day, a board for startups.

(Not to say none frequent this board, but they seem relatively rare these days)


The "I did this cool thing" posts get way more upvotes than the "let's be startuppy" posts. I don't think the "hacker" population is as rare as you're suggesting.


I don't think "shiny new thing" posts getting more upvotes indicate anything about the hacker population.


I think he's referring to posts like this[0] rather than the shiny new tool people usually get excited about.

[0]: https://news.ycombinator.com/item?id=30803589


Correct.


My dream is to get a farm and start doing a lot of things which already exist and explore them by myself.

Doing pottery, art, music etc.

I do call myself a hacker and what I do for a living is very aligned with one of my biggest hobbies, "computer", but that doesn't mean shit to me tbh.

I will leverage AI to write the things I wanna write but know that I don't have the time for doing those 'bigger' projects.

Btw. very few people actually write good software. For most it's a job. I'm really good at what I do because there are not that many of us out there in a normal company.


> AI seems antithetical to the hacker ethic

I disagree, chatbots are arguably best for hacks: dodgy kludged-up software that works if you don't sneeze on it, but accomplishes whatever random thing I wanted to get done. They're a great tool for hackers.

A bunch of clueless managers are going to fall for the hype and try and lean on the jank machine to solve problems which aren't suitable to a quick hack, and will live to regret it. Ok, who am I kidding, most of them are going to fail up. But a bunch of still-employed hackers will curse their name, while cashing the fat paycheck they're earning for cleaning up the mess.


> What I've come to the conclusion is this: the best programs and the best tools are the ones that are lovingly (and perhaps with a bit of hate too) crafted by their developers.

I think this is an example of correlation but not causation. Obviously it's true to some extent in the sense that all things being equal more care is good, but I think all you're probably saying here is that good products are built well and products that are built well tend to be built by developers that care enough to make the right design decisions.

I don't think there's any reason AI couldn't make as good (or better) technical decisions than a human developer that's both technically knowledgeable and who cares. I think I personally care a lot about the products I work on, but I'm far from infallible. I often look back at decisions I've made and realise I could have done better. I could imagine how an AI with knowledge of every product on Github, a large collection of technical architecture documentation and blog posts could make better decisions than me.

I suppose there's also some creativity involved in making the "right" decisions too. Sometimes products have unique challenges which have no proven solutions that are considered more "correct" than any other. Developers in this cases need to come up with their own creative solutions and rank them on their unique metrics. Could an AI do this? Again, I think so. At least LLMs today seem able to come up with solutions to novel problems, even if they're not always great at this moment in time. Perhaps there are limits to the creativity of current LLMs though and for certain problems which require deep creativity humans will always outperform the best models. But even this is probably only true if LLM architecture doesn't advance – assuming creativity is a limitation in the first place, which I'm far from convinced of.


Are there other sites that focus more on hacker ethic projects?


On the one hand, I agree with you. On the other hand, you could make similar arguments for the typewriter, the printing press, or the wheel.


Nope, there is a fundamental difference that people who point out this analogy ALWAYS fail to acknowledge: AI is a mechanism to replace people at work that is largely considered creative. Yes, AI may not be truly creative, but it does replace people doing jobs, and those people feel they are doing something creative.

The typewriter, the printing press, the wheel, never did anything of the sort.

And, you're also ignoring speed and scale: AI develops much faster and changes the world much faster than those inventions did.

Your argument is akin to arguing that driving at 200km/h is safe, simply because 20km/h is safe. The wheel was much safer because it changed the world at 20km/h. AI is like 1000km/h.


This feels like people getting spooked by autocomplete in editors.

We're pretty far from AI being able to properly, efficiently and effectively develop a full system; I believe that before it can take my job, I'd probably be retired or something. If my feeling is wrong, I'm still sure some form of developer will be needed, even just to keep the AI running.


I absolutely disagree regarding handwriting, and maybe also on maintaining your own typewriter. The task of producing the written document, and I don't just mean the thoughts conveyed, but each stroke of each letter, was a creative act that many enjoyed. Increasingly, there is only a very small subset of enthusiasts that are "serious" about writing by hand. Most "normal" people don't see the value in it, because if you're getting the idea across, who cares? But I'd wager if you talked to a monk who had spent their life slaving away in a too dark room making reproductions of books OR writing new accounts, and showed them the printing press, they would lament that the human joy of putting those thoughts to paper was in and of itself approaching the divine, and an important aspect of what makes us human.

Of course I don't think you need to go that far back; the main thing that differentiates pre and post printing press is that post printing press, the emphasis is increasingly more on the value of the idea, and less on the act of putting it down.


The first iPhone came out in 2007. 17 years is more or less what it took a modern and connected society to just solve mobile communication.

This includes development of displays, chips, production, software (ios, android), apps etc.

AI is building upon this speed, needs only software and specialized hardware, and the AI we are currently building is already optimizing itself (Copilot etc.).

And the output is not something 'new' that changes just a few things like navigation, postal services, or banking, but basically/potentially everything we do (including the physical world, with robots).

If this is any indication, it's very realistic to assume that the next 5-15 years will be very, very interesting.


I agree with you, it's true. I guess I should have been more precise in saying that AI takes away a much greater proportion of creative work. But of course, horse driving, handwriting, and other such things still involved a level of creativity in them, which is why in turn I am against most technology, especially when its use is unrestricted and unmoderated.


I'm highly sympathetic to your perspective, but it would be hypocritical of me to entirely embrace it. Hitting the spacebar just gives me so much joy, the syncopated negative space of it that you don't get writing by hand, the power of typing "top" and getting a birdseye view of your system, that I can't really begrudge the next generation of computing enthusiasts getting that same joy of "simply typing an idea" and getting back a coherent informed response.

I personally lament the loss of the experience of using a computer that gives the same precision that you'd expect from a calculator, but if I'm being honest, that's been slowly degenerating even without the addition of AI.


Those are tools humans use to created output directly to speed up a process. The equivalent argument for AI would be if the typewriter wrote you a novel based on what you asked it to write, and then everyone else's typewriter might create the same/similar novel if it's averaging all of the same human data input. This leads to a cultural inbreeding of sorts since the data that went into it was curated to begin with.

The real defining thing to remember is that humans don't need AI, but AI needs human data.


Humans also need human data. You might be better than I, but at least for myself, I know that I am just a weighted pattern matcher with some stochasticity mixed in.

I don't think the idea of painstakingly writing out a book, and then having a printing press propagate your book so that all can easily reproduce the idea in their own mind, is so very different.

I think this is why the real conversation here is about the lossiness of the data, where the "data" is conveying a fundamental idea. Put another way, human creativity is iterative, and the reason we accept "innovative" ideas is that we have a shared understanding of a body of work, a canon, and the real innovation is taking the canon and mixing it up with one new innovation.

I'm not even arguing that AI is net good or bad for humanity. Just that it really isn't so different than the printing press. And like the Bible was to the printing press, I think the dominant AI model will greatly shape human output for a very long time, as the new "canon" in an otherwise splintered society, for good and for bad.

Proprietary models, with funding and existing reach (like the Catholic Church when the Gutenberg press came along), will dominate the mental space. We already have Martin Luther's nailing creeds to the door of that church, though.

Still, writing by hand does still have special meaning, encoding additional information that is not conveyed by printing press. But then as now, that additional meaning is mostly only accessible to those closest to you, that have more shared experiences with you.

I'll accept that there's an additional distinction, though, since layers of communication will be imported and applied without understanding of their context; ideas replaced, filled in, rather than stripped. But let's be honest: every interpretation of a text was already distinct and uniquely an individual's own, albeit likely similar to those that shared an in-group.

AI upsets the balance between producers and consumers, not just in the sense that it's easier for more people to be producers, but in that, in this day and age, there is so little time left to be a consumer when everyone you know can be such a prolific producer.

Edit: typewriters and printing presses also need human data


> Just that it really isn't so different than the printing press.

The part that makes the goals of the AI crowd an entirely different beast from things like the printing press is that the printing press doesn't think for anyone. It just lets people reproduce their own thoughts more widely.


The printing press lets people reproduce other people's thoughts more widely. As to reproducing your own thoughts more widely, this is why I was describing a cultural "canon" as being the foundation upon which new ideas can be built. In the AI world, the "new" idea is effectively just the prompt (and iterative direction); everything else is a remix of the canon. But pre-AI, in order for anyone to understand your new idea, you had to mix it into the existing canon as well.

Edit: to be abundantly clear, I'm not exactly hoping AI can do very well. It seems like it's going to excel at automating the parts of software development that I legitimately enjoy. I think that's also true for other creator-class jobs that it threatens.


Humans/life don't need data. Life survives off of experience and evolutionary pressures. Data is a watered-down/digitized form of experience meant as a replication of that experience, the same way you can hear/analyze music on your computer. It's just usually close enough that most people can't tell the difference. All of that was fed by human "data", which means AI is ultimately a copy of evolutionary pressures that it never went through.

Typewriter/printing presses are for faster propagation or execution. AI in the cultural sense is about replication, hence the Artificial Intelligence tag. Typewriters aren't attempting to replicate or substitute, they are tools like a hammer. They are designed to be operated by humans since they are analog in nature, like your keyboard. AI doesn't need a keyboard, it's operating off our end contributions directly. It cares about the final, digitized form of the novels we feed it, not how we made it or came up with it.

That is the key difference here. It is the same thing when someone creates something based on their own direct experiences versus someone who is simply copying something. It is why AI art for example is increasingly looking bizarre in my opinion: it's completely recycled/fake.


I remember when people used to say similar things about using ASM, and then about the craft of writing things in C instead of managed languages like Java.

At the end of the day most people will only care about how the tool is solving the problem and how cheaply. A cheap, slow, dirty solution today tends to win over an expensive, quick, elegant one next year.

Now there are still some people writing ASM, and a lot of them as a hobby. Maybe in a few years writing code from scratch will be seen in the same way, something very few people have to do in restricted situations, or as a pastime.


Writing code by typing on a keyboard will be just a hobby?

Sure, and who is supposed to understand the code written by AI when we retire? Since writing code by typing on a keyboard will apparently cease to exist, who will write prompts for an AI and put the code together?

Person: Hey AI, build me a website that does a, b and c.

AI: Here you go.

Person: Looks like magic to me, what do I do with all this text?

AI: Push it to Git and deploy it to a web server.

Person: What is a web server? What is a Git?

AI: ... let me google that for you.

Yeah, I'm just not seeing it play out as in the conversation above.


> Sure, and who is supposed to understand the code written by AI when we retire?

Why would someone need to? Do the product/business people who order something to be created understand how it is done, or what Git, a web server, etc. are? It is based on trust, and if you can show the AI system can consistently achieve at least humanlike quality and speed on almost any development task, then there is no need to have a technical person in the loop.


So there could never be a new provider or a new protocol because AI wouldn't be able to use them or create them.

You can just make websites from a pre-approved list.


> So there could never be a new provider or a new protocol because AI wouldn't be able to use them or create them

On what do you base this? Is there some upper bound to the potential of AI reasoning that limits its skill at creating anything more complex? I think it is on the contrary - it is humans who are bound by our biological and evolutionary hard limits; the machine is not.


Show me 2 AIs talking to each other and agreeing on a protocol and successfully both implementing it on their side in a way that it works then.


Where did I say that is the current state of its capabilities? My argument was about the future and the perspective on its skills.


If we are writing a scifi novel, sure.

If we mean that the current way of doing things will lead to that… I have strong doubts.


Don't worry, it'll spin up a git repo and an instance for you, as well.

How stable and secure all of this will be though is another question. A rhetorical one.


Have you seen devin the ai developer demo?

Business already doesn't know what security is, they will jump head first into everything which allows them to get rid of those weird developer dudes which they have to cater to and give a lot of money.

I personally would also assume that there might be a new programming language AI will invent. Something faster, more optimized for AI.


Scripted? Did you get access to it? I’ll believe it when I try it hands-on.


I already coded a few times with ChatGPT (4). Devin doesn't have to be perfect, but it's clear (in my opinion) that this will become better and better, faster than we think.

GPT-5 will tell us this summer where we're at.


Presumably, the AI would have access to just do all the git and web server stuff for you.. The bigger problem I see would be if the AI just refuses to give you what you ask for.

Person: I want A

AI: Here's B

Person: No, I wanted A

AI: I'm sorry. Let me correct that... Here's B

.. ad nauseam.

Or alternatively:

Person: <reasonable request>

AI: I'm sorry, I can't do that


> A cheap, slow, dirty solution today tends to win over an expensive, quick, elegant one next year.

I disagree with this platitude, one reason being the sheer scale of the hidden infrastructure we rely on. Just looking at databases alone (Postgres, SQLite, Redis etc.) shows us that reliable and performant solutions dominate over others. Many other examples in other fields like operating systems, protocol implementations, cryptography.

It might be that you disagree on the basis of what you see in day-to-day B2B and B2C business cases where just solving the problem gets you paid, but then your statements should reflect that too.


There are fundamental flaws with AI-generated code: unmaintainability and unpredictability.

An attempt to modify the already generated specific piece of code or the program in general will produce an unexpected result. Saving some money on programmers but then losing millions in lawsuits, or losing customers and eventually the whole business due to unexpected behaviour of the app or a data leak, might not be a good idea after all.


Takes like this miss the forest for the trees. The overall point is that automated programming is now a target, just like automating assembly lines became a target back in the day. There will be kinks in the beginning, but once the target is set, there will be a huge incentive to work out the kinks to the point of near full automation


You do realize how predictable an assembly line is though right?


Playing devil's advocate, between compilers and tests, is it really less predictable than some junior developer writing the code?

If you're pushing unreviewed, untested code to production, that's a bigger problem than the quality of the original code


Who reviews and tests the code?

And how do they build the knowledge and skill needed to review and test without being practiced?


Retorts from the business side:

* sounds like next quarter's problem

* since everyone is doing it, sounds like it will be society who has to figure out an answer, not me. (too big to fail)

Not joking, I think those are the current de facto strategies being employed.


Really get them going when you mention that "too big to fail" is a logical fallacy.


For some reason 'logical fallacy' just results in frowns and 'Needs Improvement' ratings when used on MBAs. Weird.


Oh, I just mention how dinosaurs were too big to fail; it really helps when the whole team starts making fun of them for saying something stupid like that.


Unmaintainability? You should see the stuff some of my colleagues write. I'm fairly certain GPT could outperform them in the readability/maintainability department at this point.


You're assuming that AI-based code is even that minimally good.

Here's a nice litmus test: Can your AI take your code and make it comply with accessibility guidelines?

That's a task that has relatively straightforward subtasks (make sure that labels are consistent on your widgets) yet is still painful (going through all of them to make sure they are correct is a nightmare). A great job to throw at an AI.

And, yet, throwing any of the current "AI" bots at that task would simply be laughable.
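For a sense of scale: even the most mechanical slice of that task - flagging inputs with no accessible label at all - looks something like the rough sketch below (assuming an HTML frontend and BeautifulSoup), and the judgment calls the AI would need to make only start where this ends:

    # Rough sketch: flag <input> elements that have no accessible label.
    # Assumes an HTML frontend; real accessibility audits (contrast, focus
    # order, ARIA roles, ...) go far beyond this one check, and this version
    # is simplified (it ignores inputs nested inside a <label>).
    from bs4 import BeautifulSoup

    def unlabeled_inputs(html: str) -> list:
        soup = BeautifulSoup(html, "html.parser")
        labeled_ids = {lbl.get("for") for lbl in soup.find_all("label") if lbl.get("for")}
        problems = []
        for inp in soup.find_all("input"):
            if inp.get("type") == "hidden":
                continue  # hidden inputs don't need labels
            has_label = (
                inp.get("id") in labeled_ids
                or inp.get("aria-label")
                or inp.get("aria-labelledby")
            )
            if not has_label:
                problems.append(inp)
        return problems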


This is the part where I disagree:

> Business logic must always be defined in an unambiguous format

No. That’s only the case because unambiguous ‘business logic’ is what we have to use when trying to explain to rigid, unthinking machines how we want our business to operate. Business processes aren’t naturally rigid. Computerization has just made us think that they have to be.

What LLMs have the potential to do is to replace rigid unthinking machines with fuzzy contextually aware machines. Instead of emulating mindless bureaucrats only capable of operating on the rigid record formats of their strict filing systems, who’ll reject any form that contains a single error, as computer programs do today, future systems with a grasp on fuzzier language might be able to work with more ambiguity.


>Business processes aren’t naturally rigid. Computerization has just made us think that they have to be.

I would argue it is more finance, accounting and management theory that made it rigid. Perhaps a little HR/legal as well. I remember the days when customer service was allowed to actually help you and it was nice.


Whole article is a bit rushed. Seems like a knee jerk reaction based off the tech demo of “Devin AI”.

I’ll admit the tech demo was impressive on the face of it. But it had all the flags showing it was a well rehearsed demo (think Steve Jobs and original iPhone debut) or even simulated images and videos. For all we know the code was written well in advance and recorded by people then tech ceo Steve Job’d the shit out of that performance.

I notice the AI is still in preview and locked behind a gate.


Gone are the days of hearing our jobs were going to be outsourced to India. Now AI is going to do our jobs.

I know quite a few people in my circle who never got into tech because they believed all this outsourcing crap. After 20 years I can say without a doubt I am much better off than any of them.

How many now will never get into tech due to AI?


It's an easy enough experiment to conduct to find out for sure, first hand. We're all developers here, right? Get out some previous project ticket (you do have those, right?) and put your Manager hat on. Take a whole big step off the LLM's plate and convert your ticket input to prompts directing the AI to generate a certain subsection of code for a particular requirement. Do this for all the requirements until every use case is satisfied with some code to execute it.

Now, get all that code to compile, run at all, run correctly then finally run optimally. Hell, get it to pass a basic test case. Use the LLM for all of this. Feed the bugs to the LLM and ask for a correcting fix of the condition etc.

Even simpler, pull a ticket from your next sprint and assign it to a dev with the instructions to use the LLM as entirely as possible for the task.

The results of this experiment will be informative and unless you're building a demonstration API endpoint that reverses binary trees or something else trivial, then you will cease to be worried about AGI taking over anyones job in the near to medium term. Try it FIRST - THEN flame me if you still feel like it (if you're not still busy fucking with prompts to build code you could have built in less time yourself)?

To be clear, I'm a proponent of this kind of technology and I use it daily in my dev/ops/everything work. It's faster and more accurate than reading docs to figure something out. I'm never asking GPT to "do" something novel as much as I'm asking it to summarize something it knows, and I'm setting a context for the results. I can't tell you the last time I read a man page for some Bash thingy - it's just fine to ask GPT to build me a Bash thingy that does XYZ.

Of note, I've asked GPT-4 for some very specific things lately re: configurations (I have 3 endpoints in different protocols I'd like to proxy via Nginx, and I'd like to make these available to the outside world with ngrok - tell me the configurations necessary to accomplish this). It took me quite a bit of mucking around to get it working and the better part of the day to be satisfied with it. I'm pretty confident a suitable intern would have had difficulty with it too.

AI is great and ever increasing in its abilities, but we're just not there yet - we'll get ever closer as time goes on, but we'll never quite get there for general purpose development-in-whole-by-AI. A line that continually approaches a given curve but does not meet it at any finite distance - that's an asymptotic relationship, and I believe it describes where we are very well today.


> I assumed were still many months away from this happening, but was proved wrong with the Devin demo - even though it can only perform simple development tasks now, there’s a chance that this will improve in future.

Doesn't Devin run using GPT-4? If so, as with all generative AI technologies, its performance is subject to the common challenges faced by these systems.

Mainly, the transformer model, which is the foundation of GPT-4, is known to scale with compute: as more computational resources are allocated, its performance improves. However, this scaling is subject to diminishing returns, reaching a point where additional resources yield minimal improvements. This theoretical "intelligence" limit suggests that while continuous advancements are possible, they will eventually plateau.

The future of software development will continue to involve human software engineers until we achieve true Artificial General Intelligence. Regardless of when or if AGI becomes a reality, your skills and expertise in your niche domain will remain valuable assets. While companies may leverage AI-powered software engineering tools to augment their workforce and in effect replace you, you as a skilled professional can do the same.

If you possess a deep understanding of your core domain and a passion for building useful products, you can leverage AI software engineering tools, AI-powered design assistants, and AI-driven marketing solutions to launch new startups more efficiently and with less capital investment, especially in the realm of software-centric businesses.

So, the way I see it, it is the businesses that need to be afraid, if AI becomes capable enough to start replacing their workers, as it will also make it easier for most software engineers and product managers to build competing products in their areas of specialization.


It's a good overview, but I think there's one important aspect that's not discussed.

We're looking at AI competing with the jobs that programmers do today, but it's likely that these new AI tools will change software itself.

I mean, why have a complicated UI with design/validation/etc when you can just tell your phone you want a plane ticket to Paris tomorrow ? I'm just going to guesstimate that at least half of the apps we use today can be done without a UI or with a very different type of UI+natural language.

Add AI agents that are fine tuned on all your personal data in real time (eg. photos you take, messages you send, etc) which will end up knowing you better than your mom. In a company setting, the AI will know all your JIRA tickets and Slack/Teams conversations, e-mails and so on.

On the backend, instead of API endpoints, you'll have just one - where the AI asks you for data piece by piece, while the client AI provides it. No need to program this, the AIs can just figure it out by themselves.

Definitely interesting times, but change is coming fast.


This is similar to my thoughts: "code" is for humans. AI doesn't need a game engine or massive software; some future video game just needs to output the next frame and respond to input. Little to no code required.


Underrated comment! I agree, it's like the LLM becomes all of the logic, all of the code. I guess that's less computationally efficient though for some simple things.. for now!


This indeed may be the future, and I'll probably be a grumpy old man complaining about it. What was once a form will be replaced by a system needing to connect to a server running a 1T parameter model requiring specialized hardware and using 1e6 times the power.


That got me thinking that these things will be able to first learn and then write the optimised code to avoid the energy usage. Think 'muscle memory', but for AI interacting with the external world (or another AI)...


I believe this is where we’re headed. AI replacing much of the software itself. You don’t need a website to manage your rental properties, another one for ordering food, etc.

However it’s not as imminent as “just fine tune it on personal data”.


Can't GPT-4 replace a middle manager today already? Why aren't useless ticket pushers afraid?

“Reformulate this problem statement for a programmer to implement” straight from executive’s mouth and then “given these status updates, tell me if we are there and if anyone is trying to bullshit” is a perfect thing for an LLM.


I'm sure that there's a bunch of CEOs, drooling over the thought of firing all their engineers, and doing everything with AI.

That may actually work.

For a year or two.

Then, the engineers that learn to leverage AI, and add their own skills, will knock those folks into the skip.

Personally, I'm having a difficult time, integrating AI stuff into my work; but that's more because the stuff I do, and the platforms I use, are currently not a particularly rich AI substrate. I suspect that will rapidly change, and I'm looking forward to seeing what I can do.


I think a lot of the problems with language model coding come down to three issues.

The first is the models themselves:

1. A lack of longer context, i.e. white boarding or other means of breaking down problems into components, being able to focus context in and out, etc. This is a direction models are going to go.

Just like us, they are going to benefit from organizational tools.

The other two are just the need for normal feedback, like we developers get:

2. They need a code, run, evaluate, improve cycle. With a hard feedback cycle, today's models do much better (a rough sketch of what I mean is at the end of this comment).

They quickly respond to iterative manual feedback. Which is just an inefficient form of direct feedback.

3. A lack of multiple perspectives. Groups of models competing and critiquing each other on each task should improve results for the more tricky problems.

--

I personally think it is astounding that models generate somewhat reasonable, somewhat buggy code on their first pass.

Just like I do!

I don't know any coder that doesn't repeatedly tweak code they just wrote due to feedback. The fact that models can output first pass code so quickly without feedback today, suggests they are going to be very very good once they can test their own code, and converge on a solution informed by different attempts by different models.

A group of models can also critique each others' code organization for simplicity, readability and maintainability.
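For concreteness, here's a rough sketch of the kind of hard feedback cycle meant in point 2 above; llm() is a stand-in for whatever model call you use, not a real API:

    # Rough sketch of a generate -> run -> feed-the-errors-back loop.
    # llm() is a placeholder for whatever model API you use; the point is
    # that the model gets hard feedback (the real test output) instead of a
    # human paraphrasing the failure back to it.
    import subprocess

    def llm(prompt: str) -> str:
        raise NotImplementedError("plug your model call in here")

    def generate_until_tests_pass(task: str, module_path: str, test_cmd: list,
                                  max_rounds: int = 5) -> str:
        code, feedback = "", "none yet"
        for _ in range(max_rounds):
            code = llm(f"Task: {task}\nPrevious attempt's test output:\n{feedback}\n"
                       "Write the full module.")
            with open(module_path, "w") as f:
                f.write(code)  # overwrite the module the test suite imports
            result = subprocess.run(test_cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return code  # tests pass; done
            feedback = result.stdout + result.stderr  # hard feedback for the next round
        return code  # best effort after max_rounds

test_cmd would be something like ["pytest", "-q"]; the details don't matter, only that the raw failure output goes straight back into the next prompt.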


The "Framework: Outsourced Software Development" image confuses me. What do the green/yellow/lavender circles represent in each 6x6 grid of circles?


This is how I interpreted it:

Green - non-engineers employed by "the company"

Yellow - In-house engineers, or engineers employed by the company

Lavender - Vendor engineering external to the company


Ah, that makes sense, thanks. A legend or even sentence explaining that somewhere in the article would've been great.


The highest level would be like delegating part of your project or the entire project to a developer. These “AI coders” would take in the requirements, write the code, fix errors and deploy the final product to production.

About the "write the code" part - what code is that? Machine code, Assembly, JS? Who creates the languages, compilers, interpreters and the plethora of tools required to operate these and deploy to production?


This is where I'm stuck. The ecosystem we have today has evolved over >40y in lockstep with new hardware and technology, in a feedback loop of technological development. It is pretty resilient (from an evolutionary perspective). It includes layers of abstraction that hide enough complexity for humans to use it, but an AI doesn't need this crutch; if we delegate the code writing responsibilities to it then what happens to this ecosystem? Purely from an economic perspective it is likely that in the limit we narrow down to the most efficient way to do this, which will be a language designed for AI, and dollars to donuts it'll be owned by Google (et al). Will Python4 (etc) wither and die on the vine, due to lack of investment/utility? Then what happens to technology?


In the limit, languages will be designed by AI for AI, and the same goes for hardware. This assumes that languages higher than machine code will be needed, which is not obvious.

Whether this will be achieved in a few years, a few decades, or never, I can't tell.


IMO LLMs are going to be enhancers for experienced programmers.

Although I worry that a lot of junior programming jobs will simply vanish. Software is facing the same headwinds that a lot of low-level office jobs faced, they were first shipped overseas and then automated away.

A lot of software development which is CRUD applications is going to be disrupted.

Bespoke software that requires specialized skills is not going away anytime soon.


I think the expanded pie argument is probably valid to an extent. If costs go down, demand goes up. Also, I suspect new entrants to the field will start to slow as it won't be as lucrative as it once was. This is exactly what happened after outsourcing threatened wages in the 00's - followed by insanely high compensation in the 10's.


The expanded pie argument only works when there is something that cannot be done - or more cost effectively done - by the machine.

The argument in the article is that that something for SW developers is increasingly going to be to feed the machine with requirements.

I have no idea how this will pan out, but from where I am sitting, the ability to converse with and understand business users better than current sw devs do, even if the users stay human, doesn't feel like much of a moat, especially with the machine's evolving I/O modalities.


Big question: how to represent programs when some kind of AI-type system is doing most of the work. Code is all "what" and no "why". Intention is represented, if at all, in the comments. If you use an AI to modify code, it needs intention information to do its job. Where will that information come from?


This suggests something: Knuth's old "literate programming", a formalized comment-heavy style. That might be useful for AI-generated code, to capture the "why". The purpose is to provide the info needed as part of the prompt when an LLM next works on the code.
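A rough sketch of the style (names and requirements invented for illustration), where the comments record the intent rather than restate the code:

    # Invented example of the style: comments carry the "why" and the
    # constraints, so the next editor (human or model) inherits the intent,
    # not just the arithmetic. The requirements described are made up.
    def retry_delay(attempt: int) -> float:
        # WHY: the upstream payment API rate-limits aggressively, and the
        # business requirement is "never drop an order", so we prefer slow
        # retries over giving up. If that requirement changes, this whole
        # strategy can go.
        # INTENT: exponential backoff capped at 30s; the cap exists because
        # the checkout page times out at 60s and we want at least one retry
        # left before that happens.
        return min(30.0, 0.5 * (2 ** attempt))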

What's needed is a way to write down what the military calls "commander's intent".[1] Commander's intent reads like "why", but its real purpose is to provide guidance as to what's important when things don't go as planned.

[1] https://www.armyupress.army.mil/Portals/7/military-review/Ar...


> Before the advent of these models, the main argument against automating these tasks was that machines can’t think creatively.

Really? For me it's always been, and hasn't changed since LLMs have been released, that it's much harder to explain in enough details what you want to an AI and get the correct result than to actually code the correct result yourself.

Prompting is like a programming language where keywords have a random outcome; it's 100% inferior to simply writing the code itself.

LLMs will help with tooling around writing the code, like static analysis does, but it'll never ever replace writing code.


I honestly wouldn't mind if the type of software where you talk to business users and implement their random thoughts to automate some banal business process goes away. It generally seems low value anyway.

We should still have software products however.


For people worried about jobs: the last diagram showing the big increase in overall market size is the big 'leap of faith' about the future.


"I believe there would still be an underlying formal definition of the business logic generated in the backend"

Not just business logic. Technical logic too, such as "prove that this critical code can never run into an infinite loop"

AI could very well be the revolution needed to bring theorem provers and other formal proof based languages (Coq, Idris, etc.) to the mainstream mass of developers.
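To give a flavour of the division of labour being imagined here: the human writes down the property, something else supplies a proof, and the kernel checks it. A toy statement in Lean 4 (a proof assistant in the same family as Coq and Idris); here one line of built-in automation happens to discharge it, but the interesting cases are the ones where it doesn't:

    -- The property is the part a developer would state; the proof after ":= by"
    -- is the part an AI (or, in this trivial case, the `simp` tactic) has to
    -- find, and the kernel checks it either way.
    theorem append_length {α : Type} (xs ys : List α) :
        (xs ++ ys).length = xs.length + ys.length := by
      simp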


Only if they can understand the proofs first.

However, LLMs aren't capable of writing proofs. They can only regurgitate text that looks like a proof by training on proofs already written. There's no reasoning behind what they produce. So why waste the time of a human to review it?

If the provers' kernel and automation make the proof pass but it makes no sense, what use is it?

People write proofs to make arguments to one another.


"However, LLMs aren't capable of writing proofs."

LLMs != AI.

I'm not worried about LLM's impact on my career. In their current form they're nothing more than a trap, who will suck in anyone foolish enough to grow a dependence on them, then destroy them, both personally and corporately. (Note difference between "using them" and "grow a dependence on them".) Code that no human can understand is bad enough, code that no human has ever understood is going to be even worse as it starts piling up. There are many characteristic failures of software programming that LLMs are going to suffer from worse than humans. They're not going to be immune to ever-growing piles of code. They're not going to be immune to exponentially increasing complexity. They'll just ensure that on the day you say "OK, now, please add this new feature to my system" and they fail at it, nobody else will be able to fix it either.

If progress froze, over the next few years the programming community would come to an understanding of the rather large amounts of hidden technical, organizational, and even legal and liability debt that depending on LLMs creates.

But LLMs != AI. I don't guarantee what's going to happen after LLMs. Is it possible to build an AI that can have an actual comprehension of architecture at a symbolic level, and have some realistic chance of being told "Take this code base based on Mac OS and translate it to be a Windows-native program", and even if it has to chew on it for a couple of weeks, succeed? And succeed like a skilled and dedicated human would have, that is, an actual, skillful translation, where the code base is still comprehensible to a human at the end? LLMs are not the end state of AI.


I didn't mean to have the AI write the proof. On the contrary. Humans would be responsible for writing them, to ensure that the code generated by the AI meet certain constraints (safety for example).


If the proof makes it past validation, it's a valid proof and makes sense by definition.


It means your proof proves something.

Whether it's something you care about, or even something that should be true at all is out of scope.


It's almost always easier to understand theorems than proofs. People could be writing down properties of their programs and the AI could generate a proof or produce a counterexample. It is not necessary to understand the proof in order to know that it is correct.

At least in principle. Practically, any system at least as powerful as first-order logic is undecidable, so there can never be any computer program that would be able to do this perfectly. It might be that some day, an AI could be just as good as a trained mathematician, but so far, it doesn't look like it.


> It might be that some day, an AI could be just as good as a trained mathematician, but so far, it doesn't look like it.

One of the on-going arguments in automated theorem proving has been this central question: would anyone understand or want to read a proof written by a machine?

A big part of why we write them and enjoy them is elegance, a rather subjective quality appreciated by people.

In computer science we don't tend to care so much for this quality; if the checker says the proof is good that's generally about as far as we take it.

But endearing proofs that last? That people want to read? That's much harder to do.


I don't think anyone wants elegant proofs for their software's correctness. As long as the code is proven correct, programmers would be satisfied.


I had mentioned that. This is true today when software developers are writing proofs and why automation is useful.

You would still need to be able to read the proof that a computer generated and understand what it contains in order to judge whether it is correct with respect to your specifications. So in some sense elegance would be useful since the proof isn’t written by a human but has to be understood by one.

If the input here is still plain language prompts… then we’ll have to assume that we eventually develop artificial cognition. As far as I know, there isn’t a serious academic attempting this, only VC’s with shovels to sell.


Meanwhile, the shovels learned to solve mathematical problems of the international math olympiad.

https://deepmind.google/discover/blog/alphageometry-an-olymp...


Deep learning also learned how to play Go, not terribly interesting.


This is what I meant, thanks for clarifying.

I might also add that, “they,” on the first line are software developers. Most are not trained to read proofs. Few enough can write a good test let alone understand when a test is insufficient evidence of correctness.

I had to learn on my own.

Even if LLMs could start dropping proofs with their PRs would they be useful to people if they couldn’t understand them or what they mean?


Yeah, people fixated on the meaning of "makes no sense" to evade accepting that the proofs LLMs output are not useful at all.

In a similar fashion, almost all LLM-created tests have negative value. They are just easier to verify than proofs, but even the bias toward creating more tests (taken from the LLM fire hose) is already harmful.

I am almost confident enough to make a similarly wide claim about code. But I'm still collecting more data.


> Apart from what the AI model is capable of, we should ask think in terms of how accurate the solutions are. Initially these models were prone to hallucinations or you need to prompt them in specific ways to get what you want.

Today's prompts are yesterday's intermediate languages & toolkits. Today's hallucinations are yesterday's compiler bugs.


Folks see it wrong. It's not about human vs machine software development.

It's about allocating the brightest humans to the most productive activities.


Keep in mind there are many more humans that aren't as bright. If there are fewer and fewer ways for them to contribute then they'll find other ways to get what they need.


A reasonable, albeit lossy, gloss on "bright" is "someone who earns their living through intellectual labor". Clearly there are mathematicians who become potato farmers, but what we don't see is movement in the other direction: call it sufficient but not necessary.

The corollary being that those people are already earning their bread doing things which an artificial intelligence can't replicate. I've seen many vast overestimates of how many of these jobs could even be replaced with a Jetsons robot, and we're nowhere close to deploying anything vaguely resembling one of those.


And as long as robots are where they are, there are a lot of jobs available in hospitality and healthcare. On average, quality of life will improve.


Making sure those productive minds sell ads rather than do science. Value creation!


Yes. The distribution of information is indeed a complex problem, as graph theory shows us.

As for science, it is about one standard deviation further than software development in complexity. It's not necessarily a trade-off. On the contrary, software applies scientific breakthroughs.


My problem isn't the graph or even its particular failings (vanishing gradients in science being one of many), it's the ideological opposition to regularization.



Give us poor plebs some context, please.


Now combine this approach with concepts of no-code, which takes the task of writing lines of code out of the path from concept to production. An AI could easily create instructions that could be fed to Bubble to create an app. Either Bubble is already working on this or someone else is and Bubble will miss the boat.


I typically auto-generate about 90% of the code needed for a typical business application without using AI. That still leaves plenty of work for me: specifying what the code generator should produce and translating/negotiating with customers. Customers do not want to talk to an AI.


I'm quite unconvinced. If anyone can get an LLM to fix the geometry kernel bugs in solvespace I'll quit engineering today. From what I've seen, just understanding the intent of the algorithms is infinitely beyond an LLM, even if explained by a human. This is not going to change any time soon.

Pull Requests accepted.


I foresee we'll soon have an "Organic Software", "Software Made By Humans" seal of approval.


Large language models could be used to configure abstract syntax trees for desired behaviour, since much of devops and software architecture is slotting configuration together.

Kubernetes, Linkerd, Envoy, Istio, CloudFormation, Terraform, CDK.

We're just configuring schedules and binpacking computation and data movement into a calendar.

The semantic space of understanding behaviour is different to understanding syntax.


That’s the easiest part of the job, yes. That’s why there are so many tutorials on getting that stuff set up


Unfortunately it's unmaintainable and cannot be transformed without full time DevOps or SREs.


Why just "software development"? People should really think of the endgame. Everything will change.

I mean, what are business, science, [popular] art, and even politics, essentially? Just a clever optimization process against the real world. Throw ideas/hypotheses at the wall and see what sticks.

Computers are very good at optimization! No reason to believe it won't all be solved in N years, with like 100x more efficient compute and a couple more cool math tricks.

Give a future "GPT-10" interfaces for interacting with the real world, and it could do everything — no human supervision/"prompting" needed at all. Humans would only be an unneeded bottleneck in that optimization process.

There will be big companies consisting only of their founder and no one else. And I am not sure there will be many such companies (so like "everyone could have one") — no, it is more likely that due to economies of scale, there will be only a few megacorps, owning all the compute.

What we should worry about is how to avoid extreme wealth disparity/centralization that seems imminent in that future...


The day my job as a software engineer is made obsolete by AI is the day the singularity happens, and then we will have much, much bigger problems. Anything less is just so far from it that it is a laughable excuse for a code generator.


I think we need to have human-level AGI before "AI developers" are a possibility, and it'll probably take us a lot longer to get there than most people imagine.

Remember, the job of a developer is not just (or even primarily) writing code. It's mostly about design and problem solving - coding is just the part after you've nailed down the requirements, figured out the architecture and broken the implementation down into pieces that can be assigned (or done by yourself). Coding itself can be fun, but is a somewhat mindless task (esp. for a senior developer who can do it in their sleep) once you've got the task specified to the level where coding can actually begin. Once the various components are coded, they can be built and unit tested (which may require custom scaffolding, depending on the project), debugged and fixed, and then you can integrate them (maybe now interacting with external systems, which may be a source of problems), perform system tests, and debug and fix those issues. These aren't necessarily all solo activities - usually there's communication with other team members, maybe people from the external systems you're interacting with, etc.

So, the above process (which could be expanded quite a bit, but this gives a flavor) gets you version one of a new product. After this there will typically be bugs, and maybe performance issues, found by customers which need to be fixed (starting with figuring out which component(s) of the system are causing the issue), followed by regression tests to make sure you didn't inadvertently break anything else.

Later on in the product cycle there are likely to be functional change requests and additions, which now need to be considered relative to the design you have in place. If you are smart/experienced you may have anticipated certain types of change or future enhancement when you made the original design, and the requested new features/changes will be easy to implement. At some point in the product's lifetime there will likely eventually be changes/features requested that are really outside the scope of flexibility you had designed in, and now you may have to refactor the design to accommodate these.

As time goes by, it's likely that some of the libraries you used will have new versions released, or the operating system the product runs on will be updated, or your development tools (compiler, etc) will be updated, and things may break because of this, which you will have to investigate and fix. Maybe features of the libraries/etc you are using will become deprecated, and you will have to rewrite parts of the application to work around this.

And, so it goes ...

The point of all this is that coding is a small part, and basically the fun/easy part, of being a developer. For AI to actually replace a developer it would need to be able to do the entire job, not just the easy coding bit, and this involves a very diverse set of tasks and skills. I believe this requires human-level AGI.

Without AGI, the best you can hope for is for some of the easier pieces of the process to be addressed by current LLM/AI tech - things such as coding, writing test cases, interpreting compiler error messages, maybe summarizing and answering questions about the code base, etc. All useful, but basically all just developer tools - not a replacement for the developer.

So, yeah, one day we will have human-level AGI, and it should be able to do any job from developer to middle manager to CEO, but until that day arrives we'll just have smart tools to help us do the job.

Personally, even as a developer, I look forward to when there is AGI capable of doing the full job, or at least the bulk of it. I have all sorts of ideas for side projects that I would like an AGI to help me with!


There will always be more novel code to write that the AI hasn't learned.


I don't do much on the internet; I don't have Facebook or other socials, just HN, or I read simple docs, or code. Otherwise I'm AFK. Since October I've visited four sites that weren't simple reading, where I had to interact or do something meaningful, and all four had issues and expected me to call support, etc. I hope the future of software development has more automated tests and fewer hiring managers.


Software Developers (humans) are the horses, the models are the mechanical cars, and the analysts and business experts are the drivers. Software's only job is to empower the end user, and that's exactly what these models are doing. We're going to see a massive paradigm shift in how software technology is built in the next decade.


Maybe.

Currently, these models seem to be useful but sometimes produce incorrect code, right? So coders might still need to exist. And do they actually work well for whole projects, or just for snippets of code? (I'm sure increasing project size will be an area of rapid improvement.) Also, the models are trained on existing code; will they need extra data to keep training them?

The analogy would fit if we still needed horses to drive manual cars or to help navigate. Or if cars were an interpolation of horse behavior and we studied horses to improve the operation of cars.


More like the developers are the drivers, the dev environment is the horse, and the analysts and business folks are the passengers. And how they love to backseat drive…


Sam Altman: The nature of programming will change a lot. Maybe some people will program entirely in natural language.

From 1:29:55 in Sam Altman and Lex Fridman's new interview, including questions about AGI:

https://youtu.be/jvqFAi7vkBc?si=tZXNdVnOSk1iWX34

I agree, but there will likely be demand for experts to help direct AI toward a better solution than it would produce otherwise, and even to write pseudocode and math formulas sometimes. The key is for the expert to understand the user's needs better than the AI by itself, and ideally better than the users themselves.

Many/most software engineers could act more like art directors rather than artists, or conductors instead of musicians.


Software Engineering as a professional discipline without any specialization (and writing CRUD applications or websites doesn't count, in my opinion) never made sense to me. Software engineers producing web applications are the modern-day equivalent of production line workers, who were mostly replaced by automation. So AI will likely replace such software engineers.

In contrast, automation and technology did not replace "real" engineers. If anything, it made their job more productive.

All this to say that a generic software engineer with no specialized skills might be replaced by other professionals (engineers, accountants, etc.) who might be able to leverage AI to create production quality software.


Gonna go ahead and strong disagree. Dismissing a huge portion of engineers as "production line workers" whose work takes no thought or creativity is incredibly reductive and makes me wonder if you've ever tried doing their job.

Web applications are a category of software with just as much variance in their complexity as anything else. Are mobile apps and desktop apps also trivial? What's the difference? And if none of mobile apps, desktop apps, or web apps require "real engineering," then what does? Presumably games are just "repetitious production-line-like usage of a game engine." So what's real? The Linux kernel? Are the only real engineers those writing never-before-seen C code?

This reads to me like a self-serving humblebrag setting yourself aside as a real engineer, not like those other engineers.


You are assuming too much about what I think of myself. I am a software engineer and have been so for a long time. And I've written my fair share of UIs and apps.

If all the things we do are so varied and creative, then we shouldn't need to worry about AI replacing our jobs. But that is not what I am reading in this post. What software engineering jobs will AI replace, then?

In fact, what I wrote suggests that specialization is a way to make software engineering save itself from the AI threat by making the task of the software engineer more irreplaceable. No matter what you think about its complexities, making web sites is not it.


> If all the things we do are so varied and creative, then we shouldn't need to worry about AI replacing our jobs.

Your argument is built on this flawed premise. ChatGPT can generate poetry and stories. Does that mean writing poetry and stories is neither varied nor creative?

The fact that AI can do something doesn't diminish its value when it comes from the hands of a human.

Machines in factories can make all sorts of household decorative goods too, but there's a reason people still buy hand-made things.


> The fact that AI can do something doesn't diminish its value when it comes from the hands of a human.

Sure. Morally it doesn't. Financially, it certainly does. This is essentially in the definition of automation.

Turning software engineers into artisans will not save 95% of software engineers.



