
Why would the LLM be any good at determining where the input should be handled?



There's a long history of "this looks hard, I won't implement/fix it" in GitHub issues the LLM can train on.


And then the LLM can probably tell whether an issue can be fixed or not. That's still far from generalizing the problem, though.


LLMs have shown some limited ability to generalize.

(Opinion) I think that internally, they record how close two words are in meaning, based on the training data.

If everyone uses similar language to describe different problems, then the LLM should at least be able to act like it's generalizing.
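
A minimal sketch of that intuition with toy word embeddings (the vectors and values here are made up for illustration; real models learn them from data, and this isn't how any particular LLM stores meaning internally):

    import numpy as np

    # Toy 4-dimensional "embeddings"; real models learn hundreds or
    # thousands of dimensions from training data. These values are invented.
    embeddings = {
        "bug":    np.array([0.9, 0.1, 0.0, 0.2]),
        "defect": np.array([0.8, 0.2, 0.1, 0.3]),
        "banana": np.array([0.0, 0.9, 0.8, 0.1]),
    }

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Words used in similar contexts end up with similar vectors,
    # so "bug" scores much closer to "defect" than to "banana".
    print(cosine_similarity(embeddings["bug"], embeddings["defect"]))
    print(cosine_similarity(embeddings["bug"], embeddings["banana"]))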


GPT-4 reportedly uses a "mixture of experts" system, which is already somewhat like doing this.
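
A rough sketch of the routing idea behind a mixture of experts, reduced to a toy gating network over a few linear "experts" (an illustration of the general technique, not OpenAI's actual architecture):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: 3 "experts", each just a small linear map here.
    num_experts, dim = 3, 8
    experts = [rng.standard_normal((dim, dim)) for _ in range(num_experts)]
    gate = rng.standard_normal((dim, num_experts))

    def moe_forward(x, top_k=1):
        # The gate scores each expert for this particular input...
        scores = x @ gate
        weights = np.exp(scores) / np.exp(scores).sum()
        # ...and only the top-k experts are actually run on it.
        chosen = np.argsort(weights)[-top_k:]
        out = np.zeros_like(x)
        for i in chosen:
            out += weights[i] * (x @ experts[i])
        return out, chosen

    x = rng.standard_normal(dim)
    y, used = moe_forward(x)
    print("routed to expert(s):", used)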


One of the first tasks GPT was evaluated on was determining the sentiment of a sentence.

Like deciding whether a comment/review is positive or negative.
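
A toy illustration of that kind of sentiment task, using the Hugging Face transformers pipeline (a small fine-tuned classifier, not GPT itself), just to show the shape of the input and output:

    # Downloads a small pretrained sentiment model on first run.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    reviews = [
        "This library is great, saved me hours of work.",
        "Crashes constantly, would not recommend.",
    ]
    for review in reviews:
        result = classifier(review)[0]
        print(review, "->", result["label"], round(result["score"], 3))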


That can probably be interpreted pretty easily from the text?



