
> willing to pay a lot to avoid feeding data to MSFT/GOOG/META.

Right now, you can't pay a lot and get a local LLM with similar performance to GPT-4.

Anything you can run on-site isn't really even close in terms of performance.

The ability to fine-tune to your workplace's terminology and document set is certainly a benefit, but for many use cases that doesn't outweigh the performance difference.
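For concreteness, here is a minimal sketch of what fine-tuning a locally hosted model on an internal document set can look like, using Hugging Face transformers with LoRA adapters via peft. The base model name, the docs.txt file, and the hyperparameters are illustrative assumptions, not a recommendation:

    # Minimal sketch: LoRA fine-tuning of a local model on in-house documents.
    # The model name, file path, and hyperparameters below are assumptions.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "mistralai/Mistral-7B-v0.1"          # any locally hosted causal LM
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Train only small low-rank adapter matrices instead of the full model.
    model = get_peft_model(model, LoraConfig(
        r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM"))

    # "docs.txt": one internal document (or chunk) per line.
    data = load_dataset("text", data_files={"train": "docs.txt"})["train"]
    data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512),
                    remove_columns=["text"])

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()

Even with domain adaptation of this kind, the gap to GPT-4 on general reasoning remains the sticking point described above.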

“* According to a fun and non-scientific evaluation with GPT-4. Further rigorous evaluation is needed.”
