You're missing the step where I have to articulate (and type) the prompt in natural language well enough for the tool to understand and execute. Then if I fail, I have to write more natural language.
You said just the same in another of your posts:
> if you can begin to describe the function well
So I have to learn how to describe code rather than just writing it as I've done for years?
Like I said, it's good for the small stuff, but bad for the big stuff, for now at least.
> So I have to learn how to describe code rather than just writing it as I've done for years?
If we keep going down this path, we might end up inventing artificial languages for the purpose of precisely and unambiguously describing a program to a computer.
Exactly my point: you don't ask the LLM to give you the whole function. That would be too much English work, because it means writing down the function's contract in concrete terms, including its data structures (lists, lists of lists, and so on).
You ask it to give you one block at a time.
iterate over the above list and remove all strings matching 'apple'
open file and write the above list
etc etc kind of stuff.
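For instance, the two prompts above might come back as something like this (a sketch in Python; the list contents, the name `items`, and the path `out.txt` are hypothetical, and "matching 'apple'" is read as exact string equality):

```python
# Hypothetical list the prompts refer to.
items = ["apple", "banana", "apple pie", "cherry"]

# "iterate over the above list and remove all strings matching 'apple'"
items = [s for s in items if s != "apple"]

# "open file and write the above list"
with open("out.txt", "w") as f:
    f.write("\n".join(items) + "\n")
```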
Notice how the English here can be interpreted only one way, and the LLM now acts as a good, intelligent coding assistant.
>>I think in code, I'd rather just write in code.
Continue to think, just make the LLM type out the outcome of your ideas.
I'm sure you have a great point to make, but you're making it very poorly here.
Experienced developers develop fluency in their tools, such that writing such narrow natural language directives like you suggest is grossly regressive. Often, the tasks don't even go through our head in English like that, they simply flow from the fingers onto the screen in the language of our code, libraries, and architecture. Are you familiar with that experience?
What you suggest is precisely like a fluently bilingual person, who can already speak and think in beautiful, articulate, and idiomatic French, opting to converse to their Parisian friend through an English-French translation app instead of speaking to them directly.
When applied carefully, that technique might help someone who wants to learn French get more and deeper exposure than without a translation app, as they pay earnest attention to the input and output going through their app.
And that technique surely helps someone who never expects to learn French navigate their way through the conversations they need to have in a sufficient way, perhaps even opening new and eventful doors for them.
But it's an absolutely absurd technique for people whose fluency is such that there's no "translating" happening in the first place.
>>Experienced developers develop fluency in their tools
>>You can see that right?
I get it, but this is as big a paradigm shift as Google and the Internet were to people in the 90s. Sometimes how you do things changes, and that paradigm becomes too big a phenomenon to neglect. That's where we are now.
You have to understand that sometimes a trend or phenomenon is so large that fighting it is pointless and somewhat resembles Luddite behaviour. Not moving with the times is how you age out and get fired. I'm not talking about a normal layoff, but more like becoming totally irrelevant to whatever is happening in the industry at large.
That could totally be something that we encounter in the future, and perhaps we'll eventually be able to see some through line from here to there. Absolutely.
But in this thread, it sounds like you're trying to suggest we're already there or close to it, but when you get into the details, you (inadvertently?) admitted that we're still a long ways off.
The narrow-if-common examples you cited slow experienced people down rather than speed them up. They surely make some simple tasks more accessible to inexperienced people, just like in the translation-app example, and there's value in that, but it represents a curious flux at the edges of the industry -- akin to VBA or early PHP -- rather than a revolution at the center.
It's impactful, but still quite far from a paradigm shift.
From actual software output, it seems to me like the big SaaS LLMs compete with WordPress and, to some extent, with old-school code generation. It does not look like a paradigm shift to me. Maybe you can explain why you're convinced otherwise.
Some quite large organisations have had all-hands meetings and told their developers that they must use LLM support and 'produce more'; we'll see what comes of it. Unlike you, I consider it a bad thing when management and owners undermine workers through technology and control (or discipline, or whatever we ought to call the current fashion), i.e. the Luddites were right, and it's not a bad thing to be a Luddite.
I very much want to include more AI in my workflows. But every time I try, it slows me way down. Then when people give examples like yours, it feels like we are just doing different tasks altogether. I can write the code you mention above much faster than the English to describe it, in half a dozen languages. And it will be more precise.
Perhaps the written word just doesn't describe the phenomenon well. Do you have any go-to videos that show non-toy examples of pairing with an AI that you think illustrate your point well?
> can write the code you mention above much faster than the English to describe it in half a dozen languages
Especially if you’re fluent in an editor like Vim, Emacs, or Sublime, have set up intellisense and snippets, know the language really well, and are very familiar with the codebase.
Those were all tools I had to learn, and they slowed me down while I was learning them. So I’m extremely sympathetic to the idea that AI can be a productivity enhancer.
But those had obvious benefits that made the learning cost sensible. “I can describe a for loop in precise English and have the computer write it” flat-out sounds backwards.
> iterate over the above list and remove all strings matching 'apple'
> open file and write the above list etc etc kind of stuff.
Honestly, I can write the code faster to do those things than I can write the natural language equivalent into a prompt (in my favored language). I doubt I could have gotten there without actually learning by doing, though.
This feels an awful lot like back in the day when we were required to get graphing calculators for math class (calculus, iirc), but weren't allowed to get the TI-92 line that also had equation solvers. If you had access to those, you'd cripple your own ability to actually do it by hand, and you'd never learn.
Then again, I also started programming with notepad++ for a few years and didn't really get into using "proper" editors until after I'd built up a decent facility for translating mind-to-code, which at the time was already becoming a road less travelled.
It writes blocks of 50-100 lines fairly fast in one go, and I work in such block-sized chunks, one at a time. And I keep going, as I already have a good enough idea of how the program should look. That way I can write something like a 10,000-line Perl script (100 such iterations) in no time, like one or two working days.
That's pretty fast.
If you are telling me I should ask it to write the entire 20,000-line script in one go, that's not how I think, or how I go about approaching anything in my life, let alone code.
To go a far distance, I go in cycles of small distances, and I go a lot of them.
> It writes blocks of 50 - 100 lines fairly fast in one go, and I work in such block chunks one at a time.
How many lines of natural language do you have to write in order to get it to generate these 50-100 lines correctly?
I find that by the time I have written the right prompt to get a decently accurate 100 lines of code from these tools, which I then have to carefully review, I could have easily written the 100 lines of code myself
That doesn't make the tool very useful, especially because reviewing the code it generates is much slower than writing it myself
Not to mention the fact that even if it generates perfect, bug-free code (which, I want to emphasize, it never ever seems to do), if I want to extend it I still have to read it thoroughly, understand it, and build my own mental model of the code structure
And there’s the value of snippets and old code. Especially for automation and infrastructure tasks. If your old project is modular enough, you can copy paste a lot of the boilerplate.
I've spent a lot of time trying to make Aider and Continue do something useful, mainly with Qwen coder models. In my experience they suck. Maybe they can produce some scaffolding and boilerplate but I already have deterministic tools for that.
Recently I've tried to make them figure out an algorithm that can chug through a list of strings and collect certain ones, grouping lines that match one pattern under the most recent line matching another pattern, in a new list. They consistently fail, and not in ways that are obvious at a glance. Fixing the code manually takes longer than just writing the code.
Usually it compiles and runs, but does the wrong thing. Sometimes they screw up recursion and only collect one string. Sometimes they add code for collecting the strings that are supposed to be grouped but don't use it, and as of yet it's consistently wrong.
They also insist on generating obtuse and wrong regex, sometimes mixing PCRE with some other scheme in the same expression, unless I make threats. I'm not sure how other people manage to synthesise code they feel good about, maybe that's something only the remote models can do that I for sure won't send my commercial projects to.
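For reference, a correct version of the grouping task described above can fit in a few lines (a hypothetical sketch in Python; the patterns, the function name, and the example data are mine, not from the original project):

```python
import re

def group_lines(lines, header_pat, item_pat):
    """Group lines matching item_pat under the most recent
    line matching header_pat, as (header, [items]) pairs."""
    groups = []
    for line in lines:
        if re.search(header_pat, line):
            groups.append((line, []))
        elif re.search(item_pat, line) and groups:
            groups[-1][1].append(line)
    return groups

# Example: group "- item" lines under "## header" lines.
lines = ["## fruit", "- apple", "- pear", "## tools", "- hammer"]
groups = group_lines(lines, r"^## ", r"^- ")
# → [("## fruit", ["- apple", "- pear"]), ("## tools", ["- hammer"])]
```

The whole thing is a single pass with two regexes, which makes it all the more striking when a model botches the recursion or leaves the collected strings unused.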
You don't outsource your thinking to the tool. You do the thinking and let the tool type it for you.