
No, for now I’ve only made it work with Ollama, but doing it directly against llama.cpp would be ideal. Thank you, I’ll take note of it.



That would be great. Llama.cpp’s built-in server offers HTTP embedding endpoints.
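For reference, a minimal sketch of hitting that endpoint from Python, assuming a local `llama-server` started with `--embedding` on port 8080 (the port and model name here are assumptions, not part of the thread):

```python
# Sketch: query llama.cpp's llama-server via its OpenAI-compatible
# embeddings endpoint. Assumes the server was started with something like:
#   llama-server -m model.gguf --embedding --port 8080
import json
import urllib.request


def build_payload(text: str) -> bytes:
    """Build the JSON request body for the /v1/embeddings endpoint."""
    # "model" is largely ignored by a single-model local server; the
    # value "local" here is just a placeholder.
    return json.dumps({"input": text, "model": "local"}).encode()


def embed(text: str, base_url: str = "http://localhost:8080") -> list[float]:
    """Request an embedding vector for `text` from a running llama-server."""
    req = urllib.request.Request(
        base_url + "/v1/embeddings",
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response follows the OpenAI embeddings shape:
    # {"data": [{"embedding": [...], ...}], ...}
    return body["data"][0]["embedding"]
```

Swapping this in for an Ollama client would mostly be a matter of changing the base URL and response parsing, since both speak HTTP with JSON bodies.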



