Ollama is the easiest. For coding, use the Continue extension in VS Code and point it at your Ollama server.
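If you want to sanity-check that the Ollama server is actually reachable before wiring up Continue, a quick Python hit against its REST API does the trick. This is just a sketch assuming the default port (11434) and that you've already pulled a model; swap "llama3" for whatever you have locally.

```python
# pip install requests
import requests

# Ollama serves a REST API on localhost:11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # assumes you've already done `ollama pull llama3`
        "prompt": "Write a haiku about code review.",
        "stream": False,     # return one JSON blob instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If that prints something sensible, Continue will work once you point it at the same host/port.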
The thing to watch out for (if you have disposable income) is the new RTX 5090. Rumors are floating around that it will have 48 GB of VRAM per card, and even if not, the memory bandwidth is going to be a lot faster. People doing ML on 4090s or 3090s are going to move to those, so you can pick up a second 3090 for cheap, at which point you can load higher-parameter models. You will, however, have to learn the Hugging Face Accelerate library to support multi-GPU inference (not hard, just some reading and trial/error). See the sketch below.
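Roughly, the multi-GPU part looks like this with Transformers plus Accelerate: `device_map="auto"` shards the model's layers across whatever GPUs it finds. The model name below is just an example I picked, not a recommendation; use whatever you actually run, and make sure `accelerate` is installed or `device_map` won't work.

```python
# pip install torch transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-14B-Instruct"  # example only; substitute your model

tokenizer = AutoTokenizer.from_pretrained(model_name)

# device_map="auto" lets Accelerate split the layers across both 3090s
# (and spill over to CPU RAM if the model still doesn't fit).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.float16,
)

inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That's the whole trick: once the weights are sharded across cards, generation works the same as on a single GPU.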