Nvidia's 1080 Ti costs too much for a graduate student.
AMD is working on its own stack; supposedly later this quarter we'll get TensorFlow and other framework support on their cards.
Google's TPU is not for consumers, only for their cloud.
If Bitmain can sell a consumer ASIC at a reasonable price with support for TensorFlow, PyTorch, etc., that would be good for the deep learning community.
I suspect they'd design for datacenters rather than consumers. That's essentially what they already do with their Bitcoin rigs: a rented mining cloud.
There's still a lot the community can do on the software side to accelerate compute. For example, adding OpenCL support to popular AI frameworks/libraries would leverage existing hardware and the power of non-Nvidia GPUs.
A lot of the frameworks (Keras, Theano) use CUDA close to the metal, so someone would need to write alternative backend interfaces and then get them adopted by these libraries.
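To make that concrete, here's a minimal sketch (all names hypothetical, not from any real library) of the kind of backend-abstraction layer a framework would need before an OpenCL port could slot in alongside CUDA:

```python
# Hypothetical sketch of a framework's compute-backend dispatch.
# Real frameworks bind directly to CUDA/cuDNN; an abstraction layer
# like this is what would let a community OpenCL backend plug in.

class Backend:
    name = "cpu"

    def matmul(self, a, b):
        # Naive pure-Python matmul: the universal fallback every
        # backend can override with vendor-specific kernels.
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

class CudaBackend(Backend):
    name = "cuda"
    # A real framework would call cuBLAS/cuDNN kernels here.

class OpenCLBackend(Backend):
    name = "opencl"
    # A community port would implement OpenCL kernels here.

def pick_backend(available):
    # Prefer vendor backends when their runtime is present,
    # otherwise fall back to the CPU implementation.
    for cls in (CudaBackend, OpenCLBackend, Backend):
        if cls.name in available:
            return cls()
    return Backend()

backend = pick_backend({"cpu"})
print(backend.name)                              # "cpu"
print(backend.matmul([[1, 2]], [[3], [4]]))      # [[11]]
```

The point is that the hard part isn't the dispatch shown here; it's writing and maintaining the OpenCL kernels behind it, and convincing the upstream libraries to route their ops through such an interface instead of calling CUDA directly.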