Simplicity. This automatically tests the whole page "for free".
Robustness of feedback for time spent. A small visual change could cascade into lots of test cases that must be updated (rendering is hard, look how long it took IE to get it right). With this tool you don't need to update any tests; you just give the OK to a visual diff with expected changes.
Dynamic content. The BBC homepage is going to be different every day. Writing assertions against that could be arduous and error-prone. Diffing production content on build 779 vs 780 is a simple way to catch regressions (see the sketch after this list).
More importantly, you don't need someone technical to tell whether a change matters. This can easily be signed off by QA, designers, PMs, etc. They can see what it looked like before, what it looks like after, and what changed, and decide if that's OK.
Also, the line between a wanted visual change and a break depends on what you intended to change. That makes this quite a flexible tool (it's not just "any difference is wrong"): it can be used simply, it's unlikely to produce false negatives, and false positives are easy to ignore.
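To make the "give the OK to a visual diff" step concrete, here's a minimal sketch of the comparison itself in Python, assuming Pillow is installed and that full-page screenshots of the two builds have already been captured. The file names and the 0.1% sign-off threshold are made up for illustration; this isn't the BBC's actual tooling, just the core idea.

```python
from PIL import Image, ImageChops  # Pillow

def visual_diff(before_path, after_path, diff_path):
    """Compare two screenshots of the same page, save a diff image,
    and return the percentage of pixels that changed."""
    before = Image.open(before_path).convert("RGB")
    after = Image.open(after_path).convert("RGB")
    if before.size != after.size:
        # A layout change that alters the page dimensions is almost
        # certainly worth a human look, so treat it as fully different.
        return 100.0
    diff = ImageChops.difference(before, after)
    diff.save(diff_path)
    # Count pixels where any channel differs between the two builds.
    changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    total = before.size[0] * before.size[1]
    return 100.0 * changed / total

if __name__ == "__main__":
    # Hypothetical file names: screenshots from build 779 and build 780.
    pct = visual_diff("homepage-779.png", "homepage-780.png", "homepage-diff.png")
    flag = " - needs sign-off" if pct > 0.1 else ""
    print(f"{pct:.2f}% of pixels changed{flag}")
```

In practice you'd also surface where the change happened (the saved diff image, or a bounding box from the diff's getbbox()) so QA, designers, or PMs can approve or reject the change at a glance.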