
I used this half a year ago and loved the UX, but it wasn't possible to accelerate workloads using an AMD GPU. How's the support for AMD GPUs under Ollama today?


Hi, I'm one of the maintainers on Ollama. We are working on supporting ROCm in the official releases.

If you build from source, it should work (instructions below):

https://github.com/ollama/ollama/blob/main/docs/development....

The reason it's not in the released builds is that we are still testing ROCm.
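
For what it's worth, once a from-source build is running you can sanity-check it against the local HTTP API. A minimal sketch, assuming the default port (11434), the `requests` package, and a model that has already been pulled (the "llama2" name here is just an example):

  import requests

  resp = requests.post(
      "http://localhost:11434/api/generate",
      json={
          "model": "llama2",               # any model you have pulled locally
          "prompt": "Say hello in one sentence.",
          "stream": False,                 # return one JSON object instead of a stream
      },
      timeout=120,
  )
  resp.raise_for_status()
  print(resp.json()["response"])

If that prints a completion and your GPU shows activity (e.g. in rocm-smi), the ROCm build is doing its job.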


I'm using it on an AMD GPU with the clblast backend.


Unfortunately "AMD" and "easy" are mutually exclusive right now.

You can be a Linux/Python dev and set up ROCm.

Or you can run llama.cpp's very slow OpenCL backend, but with easy setup (see the sketch after this list).

Or you can run MLC's very fast Vulkan backend, but with no model splitting and medium-hard setup.
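
To illustrate the llama.cpp route: a rough sketch using the llama-cpp-python bindings, assuming the package was installed with the CLBlast backend enabled (something like CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python) and that you have a GGUF model file on disk (the path below is a placeholder):

  from llama_cpp import Llama

  llm = Llama(
      model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder path
      n_gpu_layers=-1,   # offload all layers to the GPU; lower this if VRAM is tight
      n_ctx=2048,        # context window
  )

  out = llm("Q: What GPU backends does llama.cpp support? A:", max_tokens=64)
  print(out["choices"][0]["text"])

Setup is easy, but as noted above the OpenCL path is slow compared to ROCm or MLC's Vulkan backend.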



