
> "Frontier LLMs can do it with enough context" is not really a strong argument against fine-tuning, because they're expensive to run.

I am not an expert in this topic, but I am wondering whether a large cached context is actually cheap to run, and whether frontier models would be cost-efficient too in such a setting.



I'd like to read more about that if anyone has any suggestions.


I am not an expert in this topic either, but it's easy to observe that the price for cached tokens is usually about 10x cheaper on major providers.
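As a back-of-the-envelope sketch of what that discount means: if you reuse one large prompt across many requests, the cached-input price dominates the bill. The per-million-token prices below are assumed round numbers, not any provider's actual rates; the 10x ratio matches the discount mentioned above.

```python
# Hypothetical per-million-token input prices (assumptions, not real rates).
PRICE_INPUT = 3.00    # $ per 1M uncached input tokens
PRICE_CACHED = 0.30   # $ per 1M cached input tokens (~10x cheaper)

def request_cost(context_tokens: int, cached: bool) -> float:
    """Dollar cost of sending the context once, at the assumed rates."""
    price = PRICE_CACHED if cached else PRICE_INPUT
    return context_tokens / 1_000_000 * price

# A 100k-token context reused across 1,000 requests:
context, requests = 100_000, 1_000
print(f"uncached: ${requests * request_cost(context, cached=False):,.2f}")  # $300.00
print(f"cached:   ${requests * request_cost(context, cached=True):,.2f}")   # $30.00
```

Under these assumptions the cached setup costs a tenth as much, which is why a frontier model with a large cached context can be competitive with fine-tuning a smaller model, at least on inference cost.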



