Yeah, I certainly don't mean to imply that's the only reason. There are MANY reasons to hate LLMs, and people all up and down the spectrum hate them for different ones. I definitely think utility is still language-specific as well (LLMs are just terrible with some languages), project-specific, etc.
I think currently there are prompts and approaches that help ensure functions stay small and easy to reason about, but it's very context-dependent. Certainly any language or framework with a large amount of boilerplate will be less painful to work with if you hate boilerplate, though I think that could arguably be accelerating enshittification in a sense. The people who say tons of code is being generated and it will all come crashing down in an unmaintainable mess... I do kinda agree.
I'm glad I'm not writing code for medical or flight control systems or something like that. I think LLMs could be used in that context, but idk whether they'd save time or actually cost time?
Certain types of tasks require greater precision. Like working with wood: framing a house is fine, but building a dovetailed cabinet drawer is not on the table, if that makes sense?
My impression is that at this point work in high-precision environments is still in the human domain and LLMs aren't there yet. Multi-agent approaches maybe, or treating humans as the final agent in a multi-agent pipeline, maybe, idk. I'm not working on any life-or-death libraries or projects ATM, but I do feel good about test coverage, so maybe that's good enough in a lot of cases.
People who say non-devs can dev with AI or Cursor: I think at this point that's just a way of getting non-technical people to burn tokens and hand over more money, but idk if that will still be true in six months, you know?