Training and running (inference) use similar low-level operations, but training requires far more of them. So the hardware support is relevant to both; it's just that one is much more computation-heavy to begin with.
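Rough illustration of that in PyTorch terms (a toy sketch, not any particular model): inference is just a forward pass, while a training step runs the same forward pass plus a backward pass and a weight update on top of it.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 10)          # stand-in for a real model
x = torch.randn(32, 128)            # a batch of inputs
target = torch.randn(32, 10)

# Inference: one forward pass, no gradients kept.
with torch.no_grad():
    preds = model(x)

# Training step: the same forward pass, plus a backward pass
# and a parameter update -- the extra compute relative to inference.
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

preds = model(x)
loss = loss_fn(preds, target)
loss.backward()
opt.step()
opt.zero_grad()
```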
The point is that at the moment only a few of us are interested in training our own models, but this will change gradually as Apple introduces more machine learning models trained on our own local data behind the scenes. At that point, Macs with less support for this will start to slow down (well, or they’ll just not support those features). And the M1 chip will be worse in this regard than the M2.
As long as you don't mind your machine using lots of power when plugged in overnight (and training then), the old machines should be able to keep up for quite a while?
I'd love to see training working on ARM + MPS Macs. Hopefully someone knowledgeable chimes in to confirm it's actually possible.
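For what it's worth, recent PyTorch builds do expose the Apple GPU as an `mps` device, so a basic training loop can at least be pointed at it; whether every op a given model needs is implemented is the real question. A minimal sketch with a toy model and made-up data, just to show the device selection:

```python
import torch
import torch.nn as nn

# Use the Apple GPU via Metal Performance Shaders if this build supports it.
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    x = torch.randn(256, 64, device=device)   # toy inputs, created on-device
    y = x.sum(dim=1, keepdim=True)            # toy target
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(device, loss.item())
```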
I tried running some MPS-supported conda/PyTorch inference (Whisper, to be precise) on a workplace-issued M1 Mac and it fell back to CPU. I encountered:
- `NotImplementedError: The operator 'aten::_fft_r2c' is not currently implemented for the MPS device`
- `TypeError: Trying to convert ComplexFloat to the MPS backend but it does not have support for that dtype`.
EDIT: just read up on it; this might have more to do with the audio preprocessing (the FFT step) than with the model itself.
Fingers crossed for better Mac support in ML in general! Competition is a great thing.
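In case it helps anyone hitting the same `aten::_fft_r2c` error: PyTorch has an opt-in CPU fallback for operators that don't have an MPS kernel yet, enabled via an environment variable set before torch is imported. It won't help with missing dtype support (the ComplexFloat error), and I haven't verified Whisper runs cleanly this way, but it can keep a model on the GPU with only the unsupported ops punted to the CPU. Something like (the audio path is just a placeholder):

```python
import os

# Must be set before torch is imported; ops without an MPS kernel
# (like aten::_fft_r2c) then run on the CPU instead of raising.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch
import whisper  # openai-whisper

device = "mps" if torch.backends.mps.is_available() else "cpu"
model = whisper.load_model("base", device=device)
result = model.transcribe("audio.wav")   # placeholder input file
print(result["text"])
```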
Yes, and the article's author suggests that more AI training will be done on the client machine, giving spelling and grammar checking as an example.
I don't know much about AI and stuff, but I know Apple's been doing machine learning / image recognition and the like for a long time now, so that the Photos app on your phone can recognize the same people in photos or categorise them - all done on device instead of uploading your photos.