
There's a long history of "this looks hard, I won't implement/fix it" responses in GitHub issues that the LLM can train on.



And then the LLM can probably tell whether an issue can be fixed or not. That's far from generalizing the problem, though.


LLMs have shown some limited ability to generalize.

(Opinion) I think that, internally, they record how close two words are in meaning, based on the training data.

If everyone uses similar language to describe different problems, then the LLM should be able to at least act like it's generalizing.
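
A rough sketch of that intuition (toy numbers, not a real model): an embedding assigns each word a vector, and cosine similarity scores how close two words are in meaning. The vectors below are made up purely for illustration; a real model learns them from its training data.

    import numpy as np

    # Toy 4-dimensional "embeddings". The numbers are invented for illustration;
    # a real model learns these vectors from its training data.
    embeddings = {
        "bug":    np.array([0.9, 0.1, 0.3, 0.0]),
        "defect": np.array([0.8, 0.2, 0.4, 0.1]),
        "banana": np.array([0.0, 0.9, 0.1, 0.8]),
    }

    def cosine_similarity(a, b):
        # Close to 1.0 = similar meaning, close to 0.0 = unrelated.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine_similarity(embeddings["bug"], embeddings["defect"]))  # high
    print(cosine_similarity(embeddings["bug"], embeddings["banana"]))  # low

Words that appear in similar contexts end up with similar vectors, which is why similar language used for different problems can look "the same" to the model.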



