
A lot of people are looking at this wrong. A $350 3060Ti has 12GB of RAM. If there's a way to run models locally, it opens the door to:

1) Privacy-sensitive applications

2) Tinkering

3) Ignoring filters

4) Prototyping

5) Eventually, a bit of extra training

The upside isn't so much cost/performance as having local control instead of depending on a cloud-based solution.




I have that exact card; this may be the nudge that gets me to remove Windows from the computer and try out Linux gaming (and local GPT).


Thing is, you don't have to totally switch to Linux. I'm running ML/CUDA workloads through WSL without too many problems.
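
For anyone sanity-checking a fresh setup: with a recent NVIDIA driver installed on the Windows side only (nothing inside WSL itself), something like this from the WSL shell should confirm the GPU is visible. This is just a sketch and assumes a CUDA-enabled PyTorch build is already installed:

    # GPU should show up via the Windows driver passthrough
    nvidia-smi

    # assuming a CUDA-enabled PyTorch build
    python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"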


Even if they weren't "too many," what kinds of problems have you encountered running ML/CUDA in WSL? Thanks.


Not exactly the answer to your question, but I just run ML/CUDA workloads directly on Windows. PyTorch works fine.

I haven't needed multi-GPU training so far (I just run experiments in parallel), so I'm unsure about the state of that. Additionally, torchvision does not support GPU video decoding on Windows. Those are the only two limitations I've hit so far.
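
By "in parallel" I mean pinning each run to a single GPU from its own terminal, something like this in PowerShell (train.py and the flags are hypothetical placeholders):

    # hypothetical train.py and flags; one process per GPU, each in its own terminal
    $env:CUDA_VISIBLE_DEVICES = "0"; python train.py --lr 1e-3
    $env:CUDA_VISIBLE_DEVICES = "1"; python train.py --lr 3e-4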


WSL problems not related to CUDA:

- you need a patch to expose ports for services running in WSL to the network (WSLHostPatcher); see the sketch after this list for a manual alternative

- the Virtual Hard Disk (vhdx) does not free unused space easily and can grow quickly (compacting it by hand is also sketched after the list). I ended up symlinking my code and dataset folders to mounts rather than keeping much data inside the vhdx

- beware of upgrades, etc. I think I've nuked my WSL install twice due to config issues. Having all your code/data on mounts also makes recovery painless.
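
For reference, rough manual versions of the first two workarounds. The port, address, and vhdx path below are placeholders, and both want an elevated Windows prompt:

    :: manual alternative to WSLHostPatcher: forward one port into the WSL VM
    netsh interface portproxy add v4tov4 listenport=8888 listenaddress=0.0.0.0 connectport=8888 connectaddress=<WSL IP>

    :: reclaim unused vhdx space (the path varies per machine)
    wsl --shutdown
    diskpart
    :: then, inside diskpart:
    select vdisk file="<path to ext4.vhdx>"
    compact vdisk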

Problems related to ML/CUDA:

- how you install PyTorch + CUDA matters. I ended up just installing from `conda --channel fastchan` and not touching it. Not ideal, but it works

- Don't forget to configure the RAM allocation in case you need a lot (see the .wslconfig sketch after this list)

- I haven't tried running CUDA Docker containers on WSL; that may be an easier way to do all of this.
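
On the RAM allocation: it lives in .wslconfig under your Windows user profile. A minimal sketch; the numbers are machine-dependent:

    [wsl2]
    # machine-dependent caps; take effect after `wsl --shutdown`
    memory=24GB
    swap=8GB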

Running on Windows directly is also an option. I chose to run on WSL because most learning resources and documentation assume Linux installs and setups.


I've had great results recently using Steam/Proton on Arch with my AMD 6750XT.


For AI? How did you set up ROCm on a 6750 XT?


Nitpicky, but the RTX 3060 (non-Ti) has a variant with 12 GB, whilst the Ti is 8 GB. Agree with your points, though.


Thanks.

Ti-po, I guess :)



