
I really hope GPT-5 is good. GPT-4 sucks at programming.



It's excellent at programming if you actually know the problem you're trying to solve and the technology. You need to guide it with actual knowledge you have. Also, you have to adapt your communication style to get good results. Once you 'crack the pattern', you'll have a massive productivity boost.


In my experience 3.5 was better at programming than 4, and I don't know why.


It's better than at least 50% of the developers I know.


A developer who just pastes in code from GPT-4 without checking what it wrote is a horror scenario; I don't think half of the developers you know are really that bad.


What kind of people are you working with?


It's not better than any of the developers I work with.

Trying to talk it into writing anything other than toy code is an exercise in banging my head against the wall.


Look to a specialized model instead of a general-purpose one.


Any suggestions? Thanks

I have tried Phind, and on anything beyond mega-junior-tier questions it suffers as well and gives bad answers.


You have to think of LLMs as more of a better search engine than something that can actually write code for you. I use Phind for writing obscure regexes or shell syntax, but I always verify the answer. I've been very pleased with the results. I think anyone disappointed with it is setting the bar too high and won't be fully satisfied until LLMs can effectively replace a senior dev (which, let's be real, is only going to happen once we reach AGI).


Yeah, I use them daily and that's my issue as well. You have to learn what to ask, or you spend more time debugging their junk than being productive, at least for me. Devv.ai is my most recent try, and so far it's been good, but library changes quickly cause it to lose accuracy. It can't tell what library version you're on or what it's referencing, which wastes a lot of time.

I like LLMs for general design work, but I’ve found accuracy to be atrocious in this area.


> library changes quickly cause it to lose accuracy

Yup, this is why an LLM-only solution will not work. You need to provide extra context crafted from the language or library resources (docs, code, help, chat).

This is the same thing humans do. We go to the project resources to figure out what code to write.
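
Roughly the shape I have in mind, as a TypeScript sketch (searchDocs and callModel are just placeholder names here, not any real SDK):

    // Sketch: stuff retrieved library docs into the prompt instead of
    // relying on whatever the model memorized at training time.
    interface DocSnippet {
      source: string;   // e.g. "bevy 0.12 migration guide" (hypothetical)
      text: string;
    }

    async function askWithContext(
      question: string,
      searchDocs: (q: string) => Promise<DocSnippet[]>,
      callModel: (prompt: string) => Promise<string>
    ): Promise<string> {
      const snippets = await searchDocs(question);
      const context = snippets
        .map(s => `[${s.source}]\n${s.text}`)
        .join("\n\n");
      const prompt =
        "Answer using ONLY the library docs below. " +
        "If they don't cover it, say so.\n\n" +
        context + "\n\nQuestion: " + question;
      return callModel(prompt);
    }

The point is just that the retrieval step carries the version-specific knowledge; the model only has to read it.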


Fwiw that's what Devv.ai claims to do (based on my reading of their announcement, at least). Regardless of how true those claims are, its library versioning support seems very poor, at least for the one library I tested it on (Rust's Bevy).


kapa.ai is another SaaS focused on per-project LLMs

As a developer, you would want something like this, which has access to all the languages / libraries you actually use


It will be a system, not a single model, and will depend on what programming task you want to perform.

You'll probably need routers, RAG, and reranking.

I think there is a role for LLM + deterministic code gen as well (https://github.com/hofstadter-io/hof/blob/_dev/flow/chat/pro...)
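
In very rough TypeScript, the shape I mean is something like this (every interface here is made up for illustration, not a real library):

    // Sketch of the "system, not a single model" idea: route the task,
    // retrieve + rerank context, then either call an LLM or fall back
    // to deterministic code generation.
    type Task = { kind: "explain" | "generate" | "refactor"; input: string };

    interface Pipeline {
      retrieve(query: string): Promise<string[]>;              // RAG step
      rerank(query: string, docs: string[]): Promise<string[]>;
      llm(prompt: string): Promise<string>;
      codegen(spec: string): string;   // deterministic, e.g. templates from a schema
    }

    async function run(task: Task, p: Pipeline): Promise<string> {
      // "router": pick a strategy per task type
      if (task.kind === "generate") {
        return p.codegen(task.input);  // deterministic where possible
      }
      const docs = await p.retrieve(task.input);
      const top = (await p.rerank(task.input, docs)).slice(0, 5);
      return p.llm(top.join("\n---\n") + "\n\nTask: " + task.input);
    }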


Interesting. I was hoping for something with a UI like ChatGPT or Phind.

Something that I can just use as easily as Copilot. Unfortunately, every single one sucks.

Or maybe that's just how programming is: it's easy at the surface/iceberg level, and below is just massive amounts of complexity. Then again, I'm not doing menial stuff, so maybe I'm just expecting too much.


I think a more IDE-native experience is better than a chat UI.

I don't want to have to copy & paste between applications; just let me highlight some sections and then run some LLM operation on them.

i.e. a VS Code extension with keyboard shortcuts.
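
A minimal sketch of that kind of extension (the command name and the callLlm stub are made up; the actual keybinding would go in the extension's package.json under "contributes.keybindings"):

    import * as vscode from "vscode";

    // Placeholder: wire this up to whatever model/API you actually use.
    async function callLlm(code: string): Promise<string> {
      return "// TODO: model output for:\n" + code;
    }

    export function activate(context: vscode.ExtensionContext) {
      const cmd = vscode.commands.registerCommand("llm.runOnSelection", async () => {
        const editor = vscode.window.activeTextEditor;
        if (!editor) { return; }
        const selection = editor.selection;
        const text = editor.document.getText(selection);
        const result = await callLlm(text);
        // replace the highlighted code with the model's answer
        await editor.edit(edit => edit.replace(selection, result));
      });
      context.subscriptions.push(cmd);
    }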



