Hacker News

Isn't a contemporary NVIDIA GPU essentially a vector machine with predication and scatter/gather? In CUDA it's just hidden behind the "SIMT" programming model.
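To make the "SIMT is predicated vector execution" point concrete, here's a toy sketch (illustrative Python, not real GPU code): a divergent branch like `if x[i] > 0: y[i] = 2*x[i] else: y[i] = 0` becomes a vector compare that produces a per-lane mask, with the mask selecting each lane's result.

```python
# Hypothetical 8-lane vector; values chosen arbitrarily for illustration.
x = [3, -1, 4, -5, 9, -2, 6, 0]

# 1. A vector compare yields a predicate mask (one bit per lane).
mask = [xi > 0 for xi in x]

# 2. Both sides of the "branch" execute across all lanes;
#    the mask picks which result each lane keeps.
y = [2 * xi if m else 0 for xi, m in zip(x, mask)]

print(y)
```

This is exactly what happens under warp divergence: both paths run, and per-lane predicates decide which writes commit.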

If so, there's no fundamental reason you couldn't build a decent GPU on RISC-V plus the vector extension (RVV). Keep the cores themselves relatively modest (no reason to waste die area on OoO logic), and use lots of hardware threads to drive memory-level parallelism. Oh, and gobs of memory bandwidth.
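The memory-parallelism angle is where gather/scatter earns its keep: each lane loads from (or stores to) its own address, so the hardware can keep many accesses in flight to hide DRAM latency. A toy sketch (illustrative, not tied to any real ISA):

```python
# Hypothetical memory and per-lane indices, chosen for illustration.
table = [10, 20, 30, 40, 50, 60, 70, 80]
idx = [6, 0, 3, 5]  # each lane has its own index

# Gather: lane i loads table[idx[i]]; on real hardware these loads
# can be issued concurrently, overlapping their memory latencies.
v = [table[i] for i in idx]

# Scatter: each lane writes back to an arbitrary slot.
out = [0] * len(table)
for i, val in zip(idx, v):
    out[i] = val

print(v)
print(out)
```

Lots of hardware threads doing this simultaneously is how a throughput core turns raw memory bandwidth into useful work.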

Though IIRC Intel still had to include some fixed-function blocks (texture units, for one), and IIRC other GPUs have kept some of theirs as well.

I also recall reading that Larrabee wasted a lot of internal bandwidth on cache-coherency traffic. Since RISC-V has a weaker memory consistency model (RVWMO), perhaps that would be less of a problem for a hypothetical "RISC-V GPU"?



