From a practical point of view, it's really in the name: Independence. PCA is great for finding a lower dimensional representation capturing most of what is going on (the basis vectors will be uncorrelated but can be hard to interpret). ICA is great for finding independent contributions you might want to pull out or analyze separately (the basis vectors are helpful in themselves).
PCA is very practical for dimensional reduction, ICA for blind source separation.
You wouldn't usually use ICA for dimensional reduction unless you have a known contribution you want to get rid of, but for some reason have difficulty identifying it.
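To make the contrast concrete, here's a minimal sketch using scikit-learn (the mixing matrix and signal shapes are just illustrative assumptions): two independent sources are mixed linearly, ICA pulls the sources back apart, while PCA just gives you decorrelated directions of maximal variance.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)

# Two independent sources: a sine wave and a square wave
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2]
S += 0.05 * rng.normal(size=S.shape)  # a little noise

# Observe only linear mixtures of the sources
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])  # hypothetical mixing matrix
X = S @ A.T

# ICA: recovers the independent sources (up to sign/scale/order)
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# PCA: orthogonal, variance-ordered components -- great for
# compression, but each component still blends both sources
pca = PCA(n_components=2)
P = pca.fit_transform(X)
```

Each recovered ICA component should correlate strongly with exactly one of the true sources, which is the "basis vectors are helpful in themselves" point above.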
Shooting from the hip here, but I think ICA was originally designed for the blind source separation problem. PCA is over 100 years old and was the original dimensionality reduction algorithm.
(Some of the hardest parts of ML, imho, are in method selection.)