Check out Ollama; it's built for running models locally. Llama 3 8B runs great for me, but 70B is very slow. Plenty of options.
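For reference, Ollama also exposes a local HTTP API (port 11434 by default), so you can script against it once a model is pulled. A minimal sketch, assuming `ollama pull llama3:8b` has already been run and the server is up:

    # Query a locally running Ollama server via its /api/generate endpoint.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3:8b",    # swap in "llama3:70b" if your hardware can handle it
        "prompt": "Why is the sky blue?",
        "stream": False,         # return one JSON object instead of a token stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

The same call works with the 70B tag; just expect much slower responses on consumer hardware.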


