Hacker News

I've observed that, because LLMs inherently want to autocomplete, they're more inclined to keep complicating a solution than to rewrite it when it was directionally bad. The most effective way I've found to combat this is to restart the session and prompt the model to produce an efficient, optimal solution to the concrete problem from scratch, then give it the problematic code and ask it to refactor accordingly.
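The workflow above can be sketched as two separate prompts, each sent in its own fresh session. This is a minimal illustration, assuming a chat-completions-style message format; the function names and system prompts are hypothetical, not any particular API's:

```python
# Sketch of the "fresh session" refactoring workflow: step 1 asks for a
# clean solution with no old code in context; step 2 gives a second fresh
# session that clean solution plus the problematic code and asks for a
# refactor. Function names here are illustrative, not a real API.

def build_fresh_solution_prompt(problem: str) -> list[dict]:
    """Step 1: a brand-new session, with none of the problematic code
    in context, asked only for an efficient solution to the problem."""
    return [
        {"role": "system",
         "content": "Produce the most efficient, idiomatic solution. "
                    "Do not assume any existing code."},
        {"role": "user", "content": problem},
    ]

def build_refactor_prompt(problem: str, reference_solution: str,
                          problematic_code: str) -> list[dict]:
    """Step 2: another fresh session that sees the clean solution first,
    then the old code, so the model refactors toward the clean approach
    instead of patching incrementally."""
    return [
        {"role": "system",
         "content": "Refactor the given code to match the reference "
                    "approach rather than patching it incrementally."},
        {"role": "user",
         "content": f"Problem: {problem}\n\n"
                    f"Reference approach:\n{reference_solution}\n\n"
                    f"Code to refactor:\n{problematic_code}"},
    ]
```

In practice each returned list would be sent as the `messages` payload of its own API call, so the second session never sees the conversation history that anchored the model to the bad direction.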



I've observed this with ChatGPT. It seems to be trained to minimize changes to code earlier in the conversation history. This is helpful in many cases, since it's easier to track what it has changed; the downside is that it almost never overhauls the approach, even when necessary.





