Hacker News | new | past | comments | ask | show | jobs | submit | login

> The models are improving in a variety of ways, whether by being larger, faster, using the same number of parameters more effectively, better RLHF techniques, better inference-time compute techniques, etc.

I didn't say they weren't improving.

I said there's diminishing returns.

There's been more effort put into LLMs in the last two years than in the two years prior, but the gains in the last two years have been much, much smaller than in the two years prior.

That's what I meant by diminishing returns: the gains we see are not proportional to the effort invested.

You said we're in a local maximum. Your comment was at odds with itself.


