
For chat-type interactions the prefill is cached, the prompt is processed at ~400 tk/s, and generation runs at 100-107 tk/s, so it's quite snappy. For long inputs, say 130,000 tokens when processing documents, prompt processing drops to around 60 tk/s I think, but don't quote me on that. The larger point is that local LLMs are becoming useful, and they're getting smarter too.
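As a rough sanity check on what those rates mean in wall-clock time, here's a minimal back-of-envelope sketch. The throughput numbers are the ones quoted above; the token counts are made-up examples, not measurements:

    # Estimate wall-clock time for one turn, given prefill and generation rates.
    # Rates are from the comment above; token counts are hypothetical examples.
    def chat_latency(prompt_tokens: int, output_tokens: int,
                     prefill_tps: float = 400.0, gen_tps: float = 100.0) -> float:
        """Seconds until the full response finishes (ignores any cached prefill)."""
        return prompt_tokens / prefill_tps + output_tokens / gen_tps

    # Short chat turn: 500 new prompt tokens, 300-token reply.
    print(f"chat turn: {chat_latency(500, 300):.1f}s")  # ~4.2s

    # Long document: 130,000 tokens at the slower ~60 tk/s prefill rate.
    print(f"long doc:  {chat_latency(130_000, 300, prefill_tps=60.0):.0f}s")  # ~2170s, about 36 min

So the "snappy" feel holds for chat, while the long-document case is dominated almost entirely by prefill time.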

