This harks back to the waterfall vs. agile debates. Ideally there would be a complete architecture plan, with all the pitfalls identified, before any code is written.
In practice this can’t happen because 30 minutes into coding you will find something that nobody thought about.
In the micro, sure. In the macro, if you are finding architecture problems after 30 minutes, then I’m afraid you aren’t really doing architecture planning up front.
Depends on what you're building. If it's another CRUD app, sure, but if it's something remotely novel you just can't understand the landscape without walking through it at least once.
> if it's something remotely novel you just can't understand the landscape without walking through it at least once
Sure you can. Mapping out the unknowns (and then having a plan to make each one knowable) is the single most important function of whoever you have designing your architecture.
Up-front architecture isn't about some all-knowing deity proclaiming the perfect architecture from on high. It's an exercise in risk management, just like any other engineering task.
> there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know.
If you're spending anywhere near as many engineering hours "getting code to work" as you're spending "thinking", then something is wrong with your process.
I’m not following. It seems straightforward enough, and consistent with both charts, that a dramatic speedup in coding yields a more modest improvement in overall productivity because typing code is a minority of the work. Is your contention here that the LLM not only documents, but accelerates the thinking part too?
It does, for sure, and I said as much in my comment. But no, the point I'm making is that this article isn't premised on thinking being an order of magnitude more work than coding. See the first chart in the article.