AI assistants just slow me down. It's a very rare case where I find them actually useful. I'm generally concerned by the number of devs who claim they're useful. What on earth are y'all accepting?
I find the only "AI" I need is really just "Intellisense": autocomplete repetitive lines or symbol names intelligently, and that doesn't even require an AI model.
Stack Overflow is what I use when I'm stuck and searching around for an answer. It's not attempting to do the work for me. At a code level I almost never copy-paste from Stack Overflow.
I also use Claude and GPT-4o at the same time while attempting to solve a problem, but they are rarely able to help.
AI is still not good at Kubernetes, AWS, CloudFormation, Terraform, and that sort of work.
The current AI coding rocket ship is VS Code + Perl/Python/Node + ReactJS + Copilot.
This is basically a killer combination, mostly because large amounts of open-source code are available out there for training models.
I'm guessing there will be industry-wide standardisation, and Python use will see a further mad rise. In the longer run, some AI-first programming language and tooling will appear with first-class integration across the whole workflow.
For now, forget about Golang. Just use Python for turbo-charged productivity.
I write Kubernetes controllers. Golang is here to stay.
> Just use Python for turbo charged productivity
This is my problem with all the "AI" bros. They seem to consistently push the idea that quickly writing code is the be-all and end-all of "productivity". It's akin to "just shovel more shit faster, it's great".
I have seen several rounds of this over the decades: Google will make bad programmers, Perl is a write-only language, Node is cancer, Eclipse is slow, etc., etc.
Eventually you realise you just can't win against better things. These are juggernauts; fighting them is pointless, because it doesn't matter whether you use them or not. Most people will, and will make great progress.
You will either be out of the industry or be forced to use it one way or the other.
K8s is probably particularly bad because its package convention basically requires vanity imports, and I would wager the vanity names people choose are wildly inconsistent.
It also doesn’t help that many packages have 3+ “versions” of themselves, so a vanity import named “core” could be v1alpha1, v1beta1 or v1.
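To make the inconsistency concrete, here's a minimal sketch. The import paths are real; which alias a given codebase picks is the arbitrary part:

    package main

    import (
        "fmt"

        // One codebase's vanity alias; "v1" and "core" are just as common
        // elsewhere for this exact same import path:
        corev1 "k8s.io/api/core/v1"
        // v1 "k8s.io/api/core/v1"
        // core "k8s.io/api/core/v1"
    )

    func main() {
        // A completion model only ever sees the alias at the use site, so
        // corev1.Pod, v1.Pod and core.Pod are three spellings of one type.
        pod := corev1.Pod{}
        pod.Name = "example" // Name is promoted from the embedded ObjectMeta
        fmt.Println(pod.Name)
    }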
For me, there are only three cases where I need outside information:
- Does this language have X (a function, method, ...)? Usually because I know X from another language and X is what I need. If it doesn't, I'll code it myself.
- How do I write X again? Mostly when I'm coming back to a language I haven't touched for a while. Again, I know what I want to do; I've just forgotten the minutiae of how to write it.
- Why is X happening? Where X is some cryptic error from the toolchain, especially with proprietary stuff. There's also "how do I do X", where X is a particular combination of steps and the documentation is lacking. I head to forums in those cases to understand what's happening or get sample code.
I only need the manual/references for the first two, and the last one only needs to be done once. Accuracy is key for these use cases, and I'd prefer snippets and scaffolds (deterministic) over LLMs for basic code generation.
I use LLMs exactly and exclusively for the first two cases. I just write comments like:
    // map this object array to extract data, and use reduce to update the hasher
and let the LLM do the rest. I rarely find myself going back to the browser: 80% of the time they spit out a completely acceptable solution, and for the remaining 20% at least the function/method they pick is correct. It saves me a lot of context switching.
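To illustrate (not their exact setup), roughly what that pattern looks like in Go: Item and its Data field are made-up stand-ins, and everything below the comment is the part the LLM fills in.

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // Item is a made-up stand-in for the "object array" element type.
    type Item struct {
        Data string
    }

    // map this object array to extract data, and use reduce to update the hasher
    func hashItems(items []Item) []byte {
        h := sha256.New()
        for _, it := range items {
            // "map": pull out the field; "reduce": fold it into the hasher.
            h.Write([]byte(it.Data))
        }
        return h.Sum(nil)
    }

    func main() {
        fmt.Printf("%x\n", hashItems([]Item{{"a"}, {"b"}}))
    }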
For me the quick refresh is better, as I only need to do it once (until I stop using the language/library again for a while), and it can be done without internet (local documentation) or high power consumption (if you were using local models). With a good editor (or IDE) all of this can be automated (snippets, bindings to the doc browser, ...), and for me it's a better flow state than waiting for an LLM to produce output.
P.S. I type fast, so as soon as I have a solution in my head I can write it quickly, and with a good REPL or edit-compile-run setup I can test it just as fast. Writing the spec, waiting for the LLM's code, and then reviewing it feels more like being a supervisor than a creator, and that's not my kind of enjoyable moment.
I agree with you; creating something just feels better than reviewing code from an LLM intern ;D
That's why I almost never use the 'chat' panel in those AI-powered extensions: I have to wait for the output, and that slows me down / kicks me out of the flow.
However, I still strongly recommend you give *LLM autocompletion* a try, from Copilot (GitHub) or Copilot++ (Cursor). In my experience it works just like context-aware, intelligent snippets, and heck, it's super fast: the response time is 0.5~1 s on average behind a corporate proxy, sometimes even fast enough to predict what I'm currently typing.
I personally think that's where the AI coding hype is going to bear fruit: faster, smarter, context- and documentation-aware completion of small snippets that eliminates the need for doc lookups. Multi-file editing and fully autonomous agent coding are overhyped.
I'm just as baffled by the people who use Stack Overflow daily. It's increasingly rare that I use it these days, to the point where I deleted my account a few years back and haven't missed it. Don't people read docs anymore? In many ways I feel lucky that I learned at a time when I only had offline docs, which forced me to become good at understanding documentation, since it's all I had.
To give you some insights from someone with a different starting point:
For context, I'm a 22-year-old CS student and part-time SRE working on everything related to Kubernetes (Golang, scripting, YAML, ...).
I can assure you that reading the fucking manual isn't a thing my fellow students or I did when we could avoid it. I think that might be because university projects don't tend to be long-lived, and finding quick pre-built solutions, without understanding them, works just fine. There is no penalty for technical debt.
Now I almost exclusively read the primary docs or code, and I think that might (surprisingly?) be because of Copilot.
The Neovim Copilot extension meant I no longer felt the need to switch to my browser all the time. Not having to make that context switch, and looking more at my code, got me into reading the LSP-provided symbol docs. After some time I noticed that Copilot just made me feel like I knew what I was doing, while reading the overlay docs provided a much deeper understanding.
Do you think this helps or hinders your ability to internalise the information (i.e. so that you don't need to look it up, in the browser or from the LSP)?
For me, documentation is a starting point, but the goal is always to not need to look it up after a little ramp-up time.
With that said, I do use ChatGPT as a replacement for documentation sometimes, asking it how to do things instead of looking it up, but again the goal is to internalise it rather than to rely on the docs or tools. I won’t shy away from reading primary documentation, though, when necessary.
It showed me some nice shortcuts (quick anonymous JS functions and the like) which I will keep using, but I noticed that I didn't remember multi-step code flows. For example, when getting the response from an HTTP request in Go, there is a chain of calls you will most likely follow: building the client > making the request > checking the response code > reading the body > maybe parsing the body if it's structured text.
I had written this kind of flow hundreds of times with Copilot running, and I still could not write it myself. I just had an abstract idea of what was happening, but no memory of the syntax.
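To make it concrete, the chain in question looks roughly like this (the URL and the JSON shape are placeholders):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func fetchJSON(url string) (map[string]any, error) {
        // 1. Build the client.
        client := &http.Client{Timeout: 10 * time.Second}

        // 2. Make the request.
        resp, err := client.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        // 3. Check the response code.
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("unexpected status: %s", resp.Status)
        }

        // 4. Read the body.
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }

        // 5. Maybe parse the body, if it's structured text.
        var parsed map[string]any
        if err := json.Unmarshal(body, &parsed); err != nil {
            return nil, err
        }
        return parsed, nil
    }

    func main() {
        // Placeholder URL; any JSON endpoint works.
        data, err := fetchJSON("https://example.com/api")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(data)
    }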
> as a replacement for documentation
I feel like LLM answers are too narrowly focused. Not having to go through the docs to find the piece I'm searching for means I miss out on important context, and possibly even better ways to solve my problem.
> If it did not then I dont see how looking up with AI can be worse
Looking up with AI is worse because it's WRONG a lot more. Random rabbit holes, misdirection, stuff that SOUNDS right but isn't. It takes a lot of time and energy to separate the wheat from the chaff.
Sure, you can find misleading or outdated blog posts or forum discussions with a Google search, but the information is far more grounded in correctness than anything from an LLM.
This is my experience with trying to use AI for coding tasks too. I've had back-and-forths with AI that involve me trying to get it to fix things to get a final working function, but since it doesn't actually understand the code, it fails to implement the fixes correctly.
Meanwhile, the stuff you find through a traditional web search tends to be either from a blog post where someone is posting actual working code snippets, or from Stack Overflow, where the code tends to be untested initially but then gets comments, votes, and updates over time that boost confidence in it. It's far more reliable to do a web search.
Because it is? I tend to phrase my queries as keywords instead of questions, and I tend to get good results. But most of the time I'm just seeking the online manual to understand how things work and what is happening, not an exact solution. It's the equivalent of using a library to write a thesis: it only requires getting familiar with the terminology of the domain, knowing where the best works are, and knowing how to use indexes and tables of contents.
> Sure, you can find misleading or outdated blog posts or forum discussions with a Google search, but the information is far more grounded in correctness than anything from an LLM.
That was the case only 2-3 months back, but the assistants have all moved to GPT-4/Sonnet and the newer versions are a whole lot better and more accurate.
That's the whole idea behind AI: when you do find something is wrong, the error function kicks in and the weights are tweaked towards more correct values.
When GPT-5 comes along it will be a whole other level of accurate. In fact, it's already close to 90% accurate for most tasks; with GPT-5 that number could go to 95% or so, which is actually good enough for nearly all the production work you could do.
Of course, in the coming years I'm guessing coding without AI assistance will be somewhat like writing code on paper. You can still do it for fun, but you won't be anywhere near as productive at a job.
I use GPT-4o and Sonnet regularly. They are so often wrong. Just yesterday GPT-4o spat out consistently incorrect tree-sitter queries and refused to accept it was wrong. It's all so pointless, and it slowed me down compared to just reading the documentation.
Some of Cursor's features appeal to my laziness: say "convert to JavaScript" and hit apply... For now it's still a bit slow (streaming words out), but when this becomes immediate? The fastest Vimmer won't stand a chance. Select code, dictate the change, review, apply: it will save my wrists.