
Anecdotal, but I still get significantly better results from ChatGPT than Claude for coding.

Claude is way less controllable: it is difficult to get it to do exactly what I want. ChatGPT is much easier to steer when asking for specific changes.

Not sure why that is; maybe the chain-of-thought and instruction-tuning datasets have made theirs a lot better for interactive use.




For me it's the opposite: ChatGPT (o1-preview and 4o) keeps making very strange errors, errors I tell it exactly how to fix, and it simply repeats the same fundamental mistakes. With Claude, I did not have that.

Example: I asked it to write some JS that finds a button on a page, clicks the button, then waits for a new element with some selector to appear and returns a ref to it. ChatGPT kept returning (pseudocode):

while (true) {
  button.click()
  wait()
  oldItems = ...
  newItems = ...
  newItem = newItems - oldItems
  if (newItem) return newItem
  sleep(1)
}

which obviously doesn't work: oldItems is captured inside the loop, after the click, so the diff never finds the new element. Claude understands that oldItems has to go outside the while; even when I tell ChatGPT to do that, it doesn't. Or it does it once, and with another change it moves it back in.
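
For the record, the shape I was after is roughly this (a rough sketch from memory, not either model's exact output; the function name is made up):

async function clickAndWaitForNew(button, selector, timeoutMs = 5000) {
  // snapshot the matching elements BEFORE clicking, outside the loop
  const oldItems = new Set(document.querySelectorAll(selector))
  button.click()
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    // anything matching the selector that wasn't there before is "new"
    const newItem = [...document.querySelectorAll(selector)].find(el => !oldItems.has(el))
    if (newItem) return newItem
    await new Promise(r => setTimeout(r, 100)) // poll every 100ms
  }
  throw new Error('timed out waiting for new element matching ' + selector)
}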


Try as I might, ChatGPT couldn’t give me working code for a simple admin dash layout in Vue with a sidebar that can minimise. Each time I corrected it, it would say “my apologies” and provide new code with a different error. About 10 times in a row it got stuck in a loop of errors, and I gave up.

Do any of these actually help with coding?


Prompting is a skill you can develop and get better at with practice. Also, some tasks just aren’t going to work well, for various reasons.

Yes, LLMs can actually help with coding. But it’s not magic. There are limits. And you get better with practice.


Without people providing their prompts, it's impossible to say whether they are skilled or not, and their complaints (or claims of "it worked with this prompt") can't be validated without seeing the output either.

Maybe there's a clue in there as to why these experiences seem so different. I'm glad GPTs don't get frustrated.


I have a personal policy of sharing my prompts as openly as possible. I've shared hundreds at this point - for a bunch of recent examples see https://simonwillison.net/2024/Oct/21/claude-artifacts/ and https://simonwillison.net/tags/ai-assisted-programming/


I've spent thousands of hours, literally, learning the ropes, and I continue to hone it. There is a much higher skill ceiling for prompting than there was for Google-fu.


Back in the day, googling was a skill; now, with the rise of LLMs, prompting is a skill.


Literally ropes as in RoPE, rotary positional embeddings?


Give it one or two examples of what you want. Don't expect these things to perfectly solve every problem - they're transformation machines, so they can do pretty much anything if you figure out the right input.


Just tried it and it worked. Try this:

give me a vue js page. I want a sidebar that minimizes (if triggered). Make simple admin placeholder page.
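
For reference, the kind of component that prompt gets you is roughly the following: a minimal sketch of a Vue 3 single-file component, not the actual output I got back.

<template>
  <div class="layout">
    <aside :class="['sidebar', { collapsed }]">
      <!-- toggling collapsed shrinks the sidebar via the .collapsed class -->
      <button @click="collapsed = !collapsed">Toggle</button>
      <nav v-if="!collapsed">
        <a href="#">Dashboard</a>
        <a href="#">Users</a>
        <a href="#">Settings</a>
      </nav>
    </aside>
    <main class="content">
      <h1>Admin</h1>
      <p>Placeholder content</p>
    </main>
  </div>
</template>

<script setup>
import { ref } from 'vue'
const collapsed = ref(false)
</script>

<style>
.layout { display: flex; min-height: 100vh; }
.sidebar { width: 200px; transition: width 0.2s; background: #f0f0f0; }
.sidebar.collapsed { width: 48px; }
.content { flex: 1; padding: 1rem; }
</style>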


This was about 6 months ago I think. I’ll happily give it another shot.


Maybe it's relative? Claude beats GPT-4/o by a wide margin for me, but I am mostly using them for Rust.


I also think there are subtle differences in how models like to be prompted, so some people will have more luck with one type of model.



