
Did it know that before the last LLM failure was posted on Twitter or Hacker News? Trawling tech media for LLM failures can safely be assumed to be part of the "human feedback".


Yes, the models are not constantly learning. They only update their knowledge when they are retrained, which happens pretty infrequently (I think the base GPT models have not been retrained, but the chat layers fine-tuned on top might have been).


It doesn't continually learn anything, though some models can do web browsing and be guided by the results of that.
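
To make the distinction concrete, here is a minimal sketch (with hypothetical search_web and call_model stubs standing in for a real search API and a real LLM endpoint) of how a browsing-enabled assistant injects fresh results into the prompt at request time. Nothing in this loop updates the model's weights; the model only "sees" the new information for the duration of that one call.

    # Minimal sketch: frozen model + per-request web search.
    # search_web and call_model are hypothetical stubs, not a real API.

    def search_web(query: str) -> list[str]:
        """Hypothetical search tool; a real system would call a search API here."""
        return [f"(stub result for: {query})"]

    def call_model(prompt: str) -> str:
        """Hypothetical call to a frozen LLM; its weights never change here."""
        return f"(stub answer based on a prompt of {len(prompt)} chars)"

    def answer_with_browsing(question: str) -> str:
        # 1. Retrieve fresh context at request time.
        snippets = search_web(question)
        # 2. Put that context into the prompt; the model only "knows" it for this call.
        prompt = "Context:\n" + "\n".join(snippets) + f"\n\nQuestion: {question}"
        return call_model(prompt)

    if __name__ == "__main__":
        print(answer_with_browsing("What LLM failure was posted on Twitter yesterday?"))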



