Regularized singular value decomposition (SVD) is one of the most powerful machine learning techniques available. Two decent open-source implementations are SciPy (http://www.scipy.org/) and SVDLIBC (http://tedlab.mit.edu/~dr/svdlibc/).
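For anyone who wants to try it quickly, here's a minimal sketch using SciPy's sparse SVD solver (the matrix and k below are toy values, not recommendations):

    import numpy as np
    from scipy.sparse.linalg import svds

    # Toy stand-in matrix; in real use this would be a large sparse ratings matrix.
    A = np.random.rand(100, 40)

    # Top-k singular triplets (svds returns singular values in ascending order).
    k = 10
    U, s, Vt = svds(A, k=k)

    # Best rank-k approximation of A in the least-squares (Frobenius) sense.
    A_k = (U * s) @ Vt
    print(np.linalg.norm(A - A_k) / np.linalg.norm(A))  # relative error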
There has also been academic research on using the much more efficient (though nondeterministic) CUR decomposition for network analysis and collaborative filtering.
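For a rough idea of what CUR does, here's a simplified sketch that keeps actual columns and rows of the matrix, sampled by squared norm; the published algorithms differ in their sampling and rescaling details, so treat every choice here as illustrative:

    import numpy as np

    def cur_sketch(A, c, r, seed=0):
        """Simplified CUR: keep c actual columns and r actual rows of A,
        sampled by squared norm, then pick the middle matrix U that
        minimizes ||A - C U R||_F. Published variants differ in details."""
        rng = np.random.default_rng(seed)
        col_p = (A ** 2).sum(axis=0) / (A ** 2).sum()
        row_p = (A ** 2).sum(axis=1) / (A ** 2).sum()
        cols = rng.choice(A.shape[1], size=c, replace=False, p=col_p)
        rows = rng.choice(A.shape[0], size=r, replace=False, p=row_p)
        C, R = A[:, cols], A[rows, :]
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # least-squares optimal U
        return C, U, R

    A = np.random.rand(50, 30)
    C, U, R = cur_sketch(A, c=15, r=15)
    print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # relative error

Because C and R are actual columns and rows of the data, the factors stay interpretable (and sparse if the input is sparse), which is part of why it's attractive for network analysis.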
Although SVD can be applied to images, it's not a particularly good compression method. It's fairly expensive to compute, and the Discrete Cosine Transform (used by JPEG) and the Wavelet Transform (JPEG2000) provide similar quality at a fraction of the cost.
The fundamental problem is that SVD is a linear technique: it finds the best low-rank approximation to a matrix in the least-squares sense. Real-world images, however, contain many non-linear features (such as sharp edges) which are very expensive to approximate linearly.
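A minimal low-rank sketch makes the storage trade-off concrete (the random array here just stands in for a real grayscale image):

    import numpy as np
    from scipy.linalg import svd

    def rank_k_image(image, k):
        """Best rank-k approximation of a grayscale image (Eckart-Young)."""
        U, s, Vt = svd(image, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k, :]

    # Storing U_k, s_k, V_k costs k * (rows + cols + 1) values versus
    # rows * cols for the raw image, so small k means real compression.
    img = np.random.rand(256, 256)  # stand-in for a real grayscale image
    approx = rank_k_image(img, k=20)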
In recent years, however, various newer techniques have been developed which use SVD (actually, the closely related Principal Components Analysis (PCA)) as a building block for achieving good compression.
One such approach is "Clustered Blockwise PCA for Representing Visual Data", which applies PCA to different regions of an image and then a second PCA to all of the resulting blocks. The intuition is that although the whole image is not linear, many regions within it are approximately linear and thus compress well individually.
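Here's a much-simplified sketch of the blockwise idea (a single PCA over all tiles; the actual paper additionally clusters the tiles before applying PCA, and every name and parameter below is illustrative):

    import numpy as np

    def block_pca(image, block=8, k=4):
        """Two-stage sketch: tile the image, then project every tile onto
        the top-k principal components of the tile set. The paper goes
        further and clusters the tiles first, with a PCA per cluster."""
        h, w = image.shape
        tiles = (image[:h - h % block, :w - w % block]
                 .reshape(h // block, block, -1, block)
                 .swapaxes(1, 2)
                 .reshape(-1, block * block))
        mean = tiles.mean(axis=0)
        # PCA via SVD of the centered tile matrix.
        _, _, Vt = np.linalg.svd(tiles - mean, full_matrices=False)
        basis = Vt[:k]                    # top-k principal directions
        codes = (tiles - mean) @ basis.T  # k coefficients per tile
        return codes, basis, mean

    img = np.random.rand(64, 64)
    codes, basis, mean = block_pca(img)
    tiles_approx = codes @ basis + mean   # rebuild tiles from k numbers each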
If you examine the images compressed using SVD, you can actually see the effects of the linear approximation in the form of light/dark bands.
Timely Development also posted the full C++ code they used for the Netflix Prize: http://www.timelydevelopment.com/demos/NetflixPrize.aspx
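Their code is C++, but the underlying regularized-SVD idea (the kind of approach mentioned at the top of the thread) is short enough to sketch; here's an illustrative Funk-style SGD version, with made-up hyperparameters not taken from their code:

    import numpy as np

    def funk_svd(ratings, n_factors=10, lr=0.005, reg=0.02, epochs=50, seed=0):
        """Sketch of the regularized-SVD style common in Netflix Prize
        entries: learn user and item factors by SGD over the observed
        ratings only, with an L2 penalty supplying the regularization.
        Hyperparameters are illustrative, not from the linked C++ code."""
        rng = np.random.default_rng(seed)
        n_users = 1 + max(u for u, _, _ in ratings)
        n_items = 1 + max(i for _, i, _ in ratings)
        P = rng.normal(0, 0.1, (n_users, n_factors))  # user factors
        Q = rng.normal(0, 0.1, (n_items, n_factors))  # item factors
        for _ in range(epochs):
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]
                pu = P[u].copy()  # cache before updating
                P[u] += lr * (err * Q[i] - reg * P[u])
                Q[i] += lr * (err * pu - reg * Q[i])
        return P, Q

    # Tiny toy data: (user, item, rating) triples.
    ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]
    P, Q = funk_svd(ratings)
    print(P[0] @ Q[1])  # predicted rating for user 0 on item 1

The key difference from a plain SVD is that the factorization is fit only to the observed entries, with the regularization term keeping the factors small where data is scarce.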