
Like many others, I’m also building my own platform to do this. What I’ve learned is that document preparation is key to getting the LLM to answer correctly, and the text splitting step is crucial: picking the right splitter and parameters for your use case matters a lot. At first I was getting incorrect or made-up answers. Setting up a proper prompt template and tuning the text splitting parameters fixed the issue for the most part, and now I have a 99% success rate.
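
For anyone curious, here’s roughly what that step looks like: a minimal sketch assuming LangChain’s RecursiveCharacterTextSplitter, where the chunk_size/chunk_overlap values and the file path are placeholders, not the settings I landed on:

    from langchain.text_splitter import RecursiveCharacterTextSplitter

    # Hypothetical input document; swap in your own corpus.
    document_text = open("docs/manual.txt").read()

    # chunk_size and chunk_overlap were the parameters that mattered most for me.
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,      # characters per chunk
        chunk_overlap=200,    # overlap so facts spanning a boundary survive
        separators=["\n\n", "\n", " ", ""],  # prefer paragraph breaks first
    )
    chunks = splitter.split_text(document_text)

Chunks that are too small lose context, and chunks that are too large dilute the retrieved passage, which in my case is where most of the made-up answers came from.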

Also, the local model you use makes a big difference. Right now wizard-mega and manticore are the best ones to use. I run the 16b ggml versions on an M2 Pro, and it takes about 30 seconds to “warm up” before producing quality responses.
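
If you want to try these models yourself, this is the general shape of it: a sketch assuming llama-cpp-python, with a placeholder model path and prompt format (most of that “warm up” time is likely just the initial model load):

    from llama_cpp import Llama

    # Model path is a placeholder; point it at whatever ggml file you downloaded.
    llm = Llama(model_path="./models/wizard-mega.ggml.bin", n_ctx=2048)

    response = llm(
        "### Instruction: Answer from the context below.\n"
        "### Context: <retrieved chunks go here>\n"
        "### Response:",
        max_tokens=256,
        temperature=0.2,  # low temperature keeps document Q&A factual
    )
    print(response["choices"][0]["text"])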



