
With 32 GB of RAM you can run inference with quantized 34B models. I wouldn't call that useless?

You don't need a GPU for LLM inference. It might not be as fast as it would be on a GPU, but it's usable.
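To see why 32 GB is enough, a rough back-of-the-envelope sketch (weights only; the KV cache and runtime overhead add a few more GB, and the exact figures depend on the quantization scheme):

```python
# Approximate weight memory for a 34B-parameter model at common precisions.
# Weights-only estimate; real usage is somewhat higher.
params = 34e9

for name, bits in [("fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    gib = params * bits / 8 / 2**30
    print(f"{name}: ~{gib:.0f} GiB")
```

At fp16 the weights alone (~63 GiB) blow past 32 GB, but a 4-bit quantization (~16 GiB) leaves headroom for the KV cache and the OS.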





