Hacker News new | past | comments | ask | show | jobs | submit login

If they can continuously train it, that could be better than a large context window; it's how an AI OS would need to work when your files are constantly being updated.



I don’t think you’d be fine-tuning a whole model in such cases. That seems over the top, no? I assume you’d get sufficiently far with big context windows, vector search, RAG, etc.
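The vector-search step being referred to can be sketched in a few lines. This is a toy illustration only: it uses bag-of-words counts in place of a real embedding model, and all function names here are made up for the example.

```python
# Minimal sketch of vector-search retrieval (the lookup step in RAG).
# Toy bag-of-words "embeddings" stand in for a real embedding model.
from collections import Counter
import math

def embed(text):
    # Toy embedding: word-frequency vector (a real system would use a
    # learned dense embedding instead).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank stored documents by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "notes on fine tuning a small model",
    "grocery list for the weekend",
    "vector search indexes for document retrieval",
]
print(retrieve("how does vector search retrieval work", docs))
```

When files change, you re-embed only the changed documents rather than retraining anything, which is the appeal of this approach over continuous fine-tuning.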


It's an interesting question. I'm not sure we really know yet what the right mix of RAG and fine-tuning is. IMO small-scale fine-tuning might be under-appreciated.
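One reason small-scale fine-tuning can be cheaper than it sounds: LoRA-style approaches freeze the pretrained weights and train only a low-rank correction. A rough NumPy sketch of the parameter arithmetic, with toy dimensions (not any real library's API):

```python
# Sketch of a LoRA-style low-rank adapter: instead of updating the full
# weight matrix W, only a small correction B @ A is trained.
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                        # layer width, adapter rank

W = rng.normal(size=(d, d))         # frozen pretrained weights
A = rng.normal(size=(r, d)) * 0.01  # trainable, rank-r factor
B = np.zeros((d, r))                # starts at zero, so W is unchanged at init

def forward(x):
    # Adapted layer: the frozen weights plus the low-rank update.
    return x @ (W + B @ A).T

full_params = W.size
adapter_params = A.size + B.size
print(adapter_params / full_params)  # fraction of weights actually trained
```

At rank 4 and width 64 the adapter is 12.5% of the layer's parameters, and the ratio shrinks further at realistic widths, which is what makes repeated small fine-tunes plausible.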



