If you do build from source, it should work (instructions at the link below):
https://github.com/ollama/ollama/blob/main/docs/development....
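Once a from-source build is running (`ollama serve`), a quick sanity check against its local HTTP API looks something like this — a minimal sketch, assuming you've already pulled a model (the "llama2" name here is just an example):

```python
import json
import urllib.request

# Ask the locally running ollama server for a single non-streamed completion.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama2",      # assumption: any model you've pulled locally
        "prompt": "Say hello",
        "stream": False,        # one JSON object instead of a token stream
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

If that returns text, the build is serving inference; whether it's actually using the GPU is easiest to confirm from the server logs.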
It's not in released builds yet because we are still testing ROCm.
You can be a Linux/Python dev and set up ROCm yourself.
Or you can run llama.cpp's OpenCL backend, which is very slow but easy to set up (rough Python sketch below).
Or you can run MLC's Vulkan backend, which is very fast but has no model splitting and a medium-hard setup.
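For the llama.cpp route, here's a small sketch of driving it from Python with the llama-cpp-python bindings. The GGUF path and layer count are placeholders, and the bindings have to be compiled against a llama.cpp build with the OpenCL backend enabled for `n_gpu_layers` to actually offload anything:

```python
from llama_cpp import Llama

# Placeholder model path; point this at any local GGUF file.
llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",
    n_gpu_layers=32,  # number of layers to offload to the GPU backend
)

# Simple one-shot completion.
out = llm("Q: What does ROCm do? A:", max_tokens=64)
print(out["choices"][0]["text"])
```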