Crowd computing / peer learning would be a great way for the community to compete with the tech giants. 10,000 NVIDIA home GPUs might beat Google's TPUs.
IMHO, 10K or even 100K GPUs bottlenecked by bandwidth/network speed might find it hard to compete with the giants. And there is also the problem of optimizing ML/DL training to work in a distributed architecture (one way around the bandwidth issue is sketched below).
But this must happen, for two reasons:
1. It's a research area that should be explored!
2. 10K GPUs are still better than the 1,000 or 3,000 GPUs in a lab!
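
To make the bandwidth point concrete, here's a minimal sketch of local SGD (periodic parameter averaging, the idea behind federated averaging): each worker takes many cheap local steps and parameters are only exchanged once per round, cutting communication by roughly the number of local steps. The workers, data, and toy regression task are all made up for illustration, not taken from any real system.

```python
# Minimal local-SGD sketch: simulate K volunteer workers that each run H
# gradient steps locally, then average parameters once per round.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression problem shared by all workers (hypothetical).
true_w = np.array([2.0, -3.0, 0.5])

def make_batch(n=32):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def grad(w, X, y):
    # Gradient of mean squared error: (2/n) * X^T (Xw - y)
    return 2.0 * X.T @ (X @ w - y) / len(y)

K = 10       # simulated volunteer workers
H = 20       # local steps between synchronizations
lr = 0.05
rounds = 30

w_global = np.zeros(3)
for r in range(rounds):
    local_ws = []
    for k in range(K):
        w = w_global.copy()
        for _ in range(H):           # H cheap local steps, no network traffic
            X, y = make_batch()
            w -= lr * grad(w, X, y)
        local_ws.append(w)
    # One communication round per H steps instead of one per step:
    w_global = np.mean(local_ws, axis=0)

print("recovered w:", np.round(w_global, 3))  # should approach true_w
```

The trade-off is that workers drift apart between syncs, so there's real research in how large H can get before convergence suffers, which is exactly the kind of open question point 1 above is about.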