I'm not sure that there is an endpoint, only a continuation of the transitions we've always been making.
What we've seen as we transitioned to higher and higher level languages (e.g., machine code → macro assembly → C → Java → Python) on unimaginably more powerful machines (and clusters of machines) is that we took on more complex applications and got much more work done faster. The complexity we manage shifts from the language and optimizing for machine constraints (speed, memory, etc.) to the application domain and optimizing for broader constraints (profit, user happiness, etc.).
I think LLMs also revive the hope that natural languages (e.g., English) are the future of software development (COBOL's dream finally realized!). But a core problem with that has always been that natural languages are too ambiguous. To the extent we're just writing prompts and the models are the implementers, I suspect we'll come up with more precise "prompt languages". At that point, it's just the next generation of even higher level languages.
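Purely as an illustration of what I mean by a more precise prompt language (every field name below is invented, not any real tool's API), the ambiguity of free-form English could give way to something more like a structured spec:

    # Invented sketch of a "prompt language": the vague request "write me a
    # short summary for the execs" becomes explicit, checkable fields.
    spec = {
        "task": "summarize",
        "input": "quarterly_report.txt",
        "max_words": 200,
        "audience": "executives",
        "must_mention": ["revenue", "churn"],
        "output_format": "bullet_points",
    }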
So, I think you're right that we'll spend more of our time thinking like product managers. But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?). I don't think these are new trends, but continuing (maybe accelerating?) ones.
I don't think COBOL's dream was to generate enormous amounts of assembly code that users would then have to maintain (in assembly!) and that produced differently wrong results every time you ran it.
It may not have been the dream, but the reality is many COBOL systems have been binary-patched to fix issues so many times that the original source may not be a useful guide to how the thing actually works.
> But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?).
It’s likely that a near-future AI system could suggest suitable math and implement it as an algorithm for the problem the user wants solved. An expert who understands it might be able to critique it and ask for a better solution, but many users could be satisfied with it.
Professionals who can deliver added value are those who understand the user better than the user understands themselves.
This kind of optimization is what I did for the last few years of my career, so I might be biased / limited in my thinking about what AI is capable of. But a lot of this area is still being figured out by humans, and there are a lot of tradeoffs between the math/software/business sides that limit what we can do. I'm not sure many business decision makers would give free rein to AI (they don't give it to engineers today). And I don't think we're close to AI ensuring a principled approach to the application of mathematical concepts.
When these optimization systems (I'm referring to mathematical optimization here) are unleashed, they will crush many metrics that are not a part of their objective function and/or constraints. Want to optimize this quarter's revenue and don't have time to put in a constraint around user happiness? Revenue might be awesome this quarter, but gone in a year because the users are gone.
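A toy sketch of that point, with made-up numbers and a single price variable (scipy's linprog as the solver): leave the user-happiness constraint out and the optimizer cheerfully pushes the price to the ceiling; put it back in and the answer changes.

    # Toy example: the optimizer maximizes exactly what you encode and ignores
    # whatever you leave out. Numbers and models here are hypothetical.
    from scipy.optimize import linprog

    # One decision variable: a price multiplier x in [0.5, 3.0].
    # Hypothetical linear models: revenue ~ 100*x, happiness proxy ~ 80 - 30*x.
    revenue = [-100.0]  # linprog minimizes, so negate to maximize revenue

    # No happiness constraint: price is pushed to the upper bound.
    greedy = linprog(c=revenue, bounds=[(0.5, 3.0)])

    # Add a floor on the happiness proxy (80 - 30*x >= 20, i.e. 30*x <= 60).
    guarded = linprog(c=revenue, A_ub=[[30.0]], b_ub=[60.0], bounds=[(0.5, 3.0)])

    print(greedy.x)   # [3.0] -- maximum allowed price, happiness never considered
    print(guarded.x)  # [2.0] -- the constraint caps the price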
The system I worked on kept our company in business through the pandemic by automatically adapting to frequently changing market conditions. But we had to quickly add constraints (within hours of the first US stay-at-home orders) to prevent gouging our customers. We had gouging prevention in place before, but the system's pricing behavior suddenly changed in both shape and magnitude, increasing prices significantly in certain areas and making them free in others.
AI is trained on the past, but there was no precedent for such a system in a pandemic. Or in this decade's wars, or under new regulations, etc. What we call AI today does not use reason. So it's left to humans to figure out how to adapt in new situations. But if AI is creating a black-box optimization system, the human operators will not know what to do or how to do it. And if the system isn't constructed in a mathematically sound way, it won't even be possible to constrain it without significant negative implications.
Gains from such systems are also heavily resistant to measurement, which we need if we want to know whether they are breaking our business. This is because such systems typically involve feedback loops that invalidate the assumption of independence between cohorts in A/B tests. That means finding more advanced experiment designs, often custom to each use case. So, maybe in addition to thinking more like product managers, engineers will need to think more like data scientists.
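For example, one design that comes up when feedback loops break cohort independence is a switchback test: randomize the whole market between policies over time blocks rather than splitting users. A rough sketch (block length and details are very much use-case specific):

    # Rough sketch of a switchback design: randomize treatment over time blocks
    # for the whole market, instead of over non-independent user cohorts.
    import random

    def switchback_schedule(n_blocks, block_hours=4, seed=0):
        """Assign each time block entirely to 'control' or 'treatment'."""
        rng = random.Random(seed)
        return [(i * block_hours, rng.choice(["control", "treatment"]))
                for i in range(n_blocks)]

    # One day split into 4-hour blocks; each block runs a single policy.
    for start, arm in switchback_schedule(n_blocks=6):
        print(f"hours {start:02d}-{start + 4:02d}: {arm}")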
This is all just in the area where I have some expertise. I imagine there are many other such areas. Some of which we haven't even found yet because we've been stuck doing the drudgery that AI can actually help with. [cue the song Code Monkey]
This made me laugh out loud. Python is not a step up from Java in my opinion. Python is more of a step up from BASIC. It's a different evolutionary path. Like LISP.
We can all agree on the increase in productivity, but a non-negligible portion of HN users would say that each of those new languages made programming progressively less fun.
I think where people will disagree is how much productivity those steps brought.
For instance, I think the step from machine code to a macro assembler is bigger than the step from a macro assembler to C (though that one is still substantial), but the step from C to anything higher level is essentially negligible compared to the massive jump from machine code to a 'low-level high-level' language like C.
So many other things happened at the same time too, so it's sometimes hard to untangle what is what.
For instance, say that C had namespaces and a solid package system with a global repo of packages, like Python, C#, and Java have.
Then you'd be able to throw things together pretty easily.
Things easily cobbled together with Python often aren't attributable to Python the language per se, but rather to Python the language plus its neat packages.
Python is a step backwards in productivity for me compared with typed languages. So no, I don't think we all agree on this. You might be more productive in Python, but that's you, not me.