
The only thing I've used GPT for is generating commit messages based on my diff, because it's better than me writing 'wip: xyz' and gives me a better idea about what I did before I start tidying up the branch.

Even if I wanted to use it for code, I just can't. And it actually makes code review more difficult when I look at PRs and the only answer I get from the authors is "well, it's what GPT said." They can't even show it works correctly by writing a test for it.

In that sense it feels like shirking responsibility - just because you used an LLM to write your code doesn't mean you don't own it. The LLM won't be able to maintain it for you, after all.




"it's what GPT said" should be a fireable offense


I wouldn't go that far; we all want to be lazy. But if people use it as a crutch and assume "everyone else uses GPT, so it's fine," then nobody is going to understand the code any more.

Half of what GPT comes up with in the reviews I see, I could rewrite much more simply and directly, improving code comprehension in the process.


That may be a bit much, but I'd consider it grounds for sitting down with the person in question to discuss the need to understand the code they turn in.





