Man. Obligatory old-person remark, but seeing this kind of quantum leap in performance per watt makes me feel young and tingly all over again. If M1 makes SIMD suck less, there are downstream effects. True, if that were /entirely/ the case, Radeon would have stolen the crown from NVidia years ago.
But this is different. There's the convenience factor here. How far does M1 have to go before commodity DL training is feasible on commodity compute, not just GPUs?
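
(Aside, since "SIMD on M1" can sound hand-wavy: on Apple Silicon that mostly means ARM NEON. Here is a minimal sketch in C of the kind of vector code in question, assuming clang on an M1 Mac; the intrinsics are standard arm_neon.h and nothing M1-specific, so treat it as illustrative only:)

  /* Add two float arrays four lanes at a time using 128-bit
     NEON registers. Build: clang -O2 neon_add.c -o neon_add
     (file name is just illustrative) */
  #include <arm_neon.h>
  #include <stdio.h>

  int main(void) {
      float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
      float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
      float out[8];

      for (int i = 0; i < 8; i += 4) {
          float32x4_t va = vld1q_f32(&a[i]);      /* load 4 floats */
          float32x4_t vb = vld1q_f32(&b[i]);
          vst1q_f32(&out[i], vaddq_f32(va, vb));  /* add and store */
      }

      for (int i = 0; i < 8; i++) printf("%g ", out[i]);
      printf("\n");
      return 0;
  }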