> First, skilled engineers using LLMs to code also think and discuss and stare off into space before the source code starts getting laid down.
Yes, and the thinking time is a significant part of overall software delivery, which is why accelerating the coding part doesn't dramatically change overall productivity or labor requirements.
I don't like the artificial distinction between thinking and coding. I think they are intimately interwoven, which is actually one thing I really like about the LLM: it takes away the pain of iterating on several different approaches to see how they pan out. Often it's only when I see code for something that I know I want to do it a different way. Reducing that iteration time is huge, and it makes me more likely to actually go for the right design rather than settling for something less good because I don't want to throw out all the "typing" I did.
Yeah these days I often give it a zero shot attempt, see where things go wrong, reset the state via git and try again. Being able to try 2-3 prototypes of varying levels of sophistication and scope is something I've done in the past manually, but doing it in an hour instead of a day is truly significant, even if they're half or a quarter of the fidelity I'd get out of a manual attempt.
Honestly, even if I did it that way and then threw it all away and wrote the whole thing manually it'd be worth using. Obviously I don't, because once I've figured out how to scope and coach to get the right result it'd be silly to throw it away, but the same value derives from that step regardless of how you follow it up.
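For what it's worth, the reset-and-retry loop described above can be sketched with plain git. The branch and tag names below are purely illustrative, and this assumes you're in an existing repository with a clean working tree:

```shell
# Mark the clean starting point so every attempt begins from the same state.
git checkout -b prototype-attempt
git tag baseline

# ... let the LLM generate an attempt, then inspect the result ...

# Discard the attempt and return to the baseline for the next try.
git reset --hard baseline
git clean -fd   # also remove any untracked files the attempt created
```

The `git clean -fd` step matters because `git reset --hard` only reverts tracked files; files the attempt newly created would otherwise linger into the next try.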
This harks back to the waterfall vs. agile debates. Ideally there would be a plan of the entire architecture, with all the pitfalls identified, before any code is written.
In practice this can’t happen because 30 minutes into coding you will find something that nobody thought about.
In the micro, sure. In the macro, if you are finding architecture problems after 30 minutes, then I’m afraid you aren’t really doing architecture planning up front.
Depends on what you're building. If it's another CRUD app, sure, but if it's something remotely novel you just can't understand the landscape without walking through it at least once.
> if it's something remotely novel you just can't understand the landscape without walking through it at least once
Sure you can. Mapping out the unknowns (and then having a plan to make each one knowable) is the single most important function of whoever you have designing your architecture.
Up-front architecture isn't about some all-knowing deity proclaiming the perfect architecture from on high. It's an exercise in risk management, just like any other engineering task.
> there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know.
If you're spending anywhere near as many engineering hours "getting code to work" as you're spending "thinking," then something is wrong in your process.
I’m not following. It seems straightforward enough, and consistent with both charts, that a dramatic speedup in coding yields a more modest improvement in overall productivity because typing code is a minority of the work. Is your contention here that the LLM not only documents, but accelerates the thinking part too?
It does, for sure, and I said that in my comment, but no, the point I'm making is that this article isn't premised on thinking being an order of magnitude more work than coding. See: first chart in article.