The advantage of GrabCut compared to the parent article is that it takes both color difference and similarity into account rather than just difference. This is done by combining all that information into a cleverly laid-out Markov Random Field, where an energy-minimizing graph cut roughly corresponds to an improved foreground/background labeling of the pixels.
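For reference, the energy being minimized has the usual data-term-plus-smoothness-term form from the GrabCut paper (Rother et al., 2004) -- sketched from memory, so check the paper for the exact notation:

```latex
E(\alpha, k, \theta, \mathbf{z}) =
  \underbrace{\sum_n D(\alpha_n, k_n, \theta, z_n)}_{\text{GMM data term}}
  \;+\;
  \underbrace{\gamma \sum_{(m,n)\in\mathcal{C}}
    [\alpha_n \neq \alpha_m]\, e^{-\beta \lVert z_m - z_n \rVert^2}}_{\text{contrast-sensitive smoothness}}
```

Here \alpha_n is the foreground/background label of pixel n, k_n its GMM component, and the smoothness term penalizes label changes more heavily between similar-colored neighbors -- which is how both "difference" and "similarity" end up in one objective.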
The disadvantage compared to the parent is that it's a semi-supervised algorithm: it requires a bounding box to be drawn around the desired object. If you had a bunch of very similar images you could pre-generate the Gaussian Mixture Model that GrabCut expects and turn it into a supervised learning algorithm. But then you'd lose the major innovation of GrabCut compared to the graph cut methods discovered before it: you would no longer be able to re-run with the newest "best-guess" labeling and be guaranteed that the energy monotonically decreases. It would also fail spectacularly if it encountered something it hadn't seen before.
Another thing to consider is that GrabCut is patent-encumbered for commercial use -- it relies on the mechanism described in Boykov, Veksler, and Zabih's "Fast Approximate Energy Minimization via Graph Cuts," which has been patented in the US since 2004 (Patent No. 6,744,923).
I think it is called GrabCut - http://research.microsoft.com/en-us/um/cambridge/projects/vi...