Hmm, it seems you're replying as a customer, not as a GPU vendor...
The thing is, there's not enough competition in the AI-GPU space.

Currently the only option for not wasting time getting some random research project from GitHub to run is to buy a card from Nvidia -- CUDA can run almost anything on GitHub. AMD GPU cards? That really depends...

And gamers often don't need more than 12 GB of VRAM for running games at 4K, so most high-VRAM customers are in the AI field.
> If you could take the card from five years ago and load it up with 80 GB of VRAM, you'd still not see the memory bandwidth of a newly-bought H100.
This is exactly what Nvidia will fight against tooth and nail -- if this were possible, its profit margins could be cut to 1/2 or even 1/8.