Pdiff allows you to specify perceptual thresholds of visibility, so diffs that are not pixel-perfect can still pass if they're 'good enough'. That's semi-critical if you render the input images using different browsers or at different resolutions, or if your images have any sub-pixel randomness.
Lots of people (myself included) have used pdiff successfully in production. It doesn't depend on ImageMagick (a bonus in my book). And it's already available in lots of Linux distros.
ImageMagick already ships the `compare` command [0] which does exactly this.
It also accepts a bunch of CLI flags, such as -fuzz, which treats colors within a certain distance as equal. That flag is very useful when dealing with JPG or similar compression that slightly alters pixels: you can ignore these barely visible differences and focus on the 'real' ones.
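The idea behind a fuzz-style comparison is easy to sketch in plain Python on raw pixel tuples. This is only an illustration of the concept (ImageMagick's actual -fuzz metric differs in its details), with hypothetical helper names:

```python
def pixels_differ(p1, p2, fuzz=0):
    """Return True if RGB pixels p1 and p2 differ by more than `fuzz`
    (Euclidean distance in RGB space). With fuzz=0 any change counts."""
    dist = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
    return dist > fuzz

def count_differences(img1, img2, fuzz=0):
    """Count differing pixels between two images, each given as a flat
    list of (r, g, b) tuples of equal length."""
    return sum(pixels_differ(a, b, fuzz) for a, b in zip(img1, img2))

a = [(10, 10, 10), (200, 200, 200)]
b = [(12, 11, 10), (100, 100, 100)]  # slight noise on the first pixel, big change on the second
print(count_differences(a, b, fuzz=0))   # -> 2 (both pixels register)
print(count_differences(a, b, fuzz=10))  # -> 1 (the JPEG-like noise is ignored)
```

The fuzz threshold is the knob: small values absorb compression noise, large values start hiding real differences.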
Interesting. I couldn't find a tool like this that would give me a useful % difference between images, so I rolled my own tiny Python version using PIL, which gives both a diff image and the % difference: https://github.com/nicolashahn/python-image-diff
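The core of a percentage-difference calculation is small; here's a minimal stdlib sketch of the general approach (sum of absolute per-channel differences over the maximum possible total), which may not match the linked tool's exact formula:

```python
def percent_diff(img1, img2):
    """Percentage difference between two images, each a flat list of
    (r, g, b) tuples of equal length: total absolute per-channel
    difference divided by the maximum possible difference.
    (A sketch of the general idea, not the linked tool's exact formula.)"""
    if len(img1) != len(img2):
        raise ValueError("images must have the same dimensions")
    total = sum(abs(c1 - c2)
                for p1, p2 in zip(img1, img2)
                for c1, c2 in zip(p1, p2))
    max_total = 255 * 3 * len(img1)
    return 100.0 * total / max_total

black = [(0, 0, 0)] * 4
white = [(255, 255, 255)] * 4
print(percent_diff(black, white))  # -> 100.0
print(percent_diff(black, black))  # -> 0.0
```

A diff image falls out of the same loop: emit the per-pixel differences instead of summing them.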
Or you can import the two images into graphics software like GIMP. Invert the colours in the top image, then knock the transparency for that layer down to 50%. If the images are identical, you'll just see grey. Any differences instantly become visible.
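The reason the trick works: blending an inverted top layer at 50% opacity computes (bottom + (255 - top)) / 2 per channel, which lands on mid-grey (127.5) wherever the two pixels match, and anything else where they don't. A quick sketch of that arithmetic (hypothetical helper name):

```python
def overlay(bottom, top):
    """Blend an inverted `top` RGB pixel over `bottom` at 50% opacity.
    Matching pixels come out as uniform mid-grey (127.5 per channel);
    any mismatch pushes the result away from grey."""
    return tuple((b + (255 - t)) / 2 for b, t in zip(bottom, top))

print(overlay((80, 120, 200), (80, 120, 200)))  # -> (127.5, 127.5, 127.5)
print(overlay((80, 120, 200), (90, 120, 200)))  # -> (122.5, 127.5, 127.5), first channel off-grey
```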
Not sure what the intent behind creating this was.
Once upon a time (several years back), I worked on one. The intent there was to compare images produced by different versions of a product. As such, one of the major features was motion detection [0] (apart from computing deltas between the images). We made use of OpenCV [1].
One of my team members built an entire QA tool around this idea. It can load thousands of URLs to capture a baseline, and then, after a deploy to the development servers, it runs again to generate a visual diff report. It's a combination of Golang, Node.js (PhantomJS), and some other bits I forget at the moment.
The product managers and content teams love it, since they can quickly skim through a report to find visual bugs that might otherwise have gone unnoticed.
If you need workflows, there's also VisualReview [1] with some basic workflow for approval testing, and Applitools [2], a commercial tool with integration with Selenium, Appium and Protractor that can do screenshot and video comparisons. I wrote about them as part of the 'Automated testing: back to the future' tools review [3]. In the article, I also mentioned DomReactor, but that seems to be discontinued now.
I wrote a tool that compares images using Applitools for Automattic during the React WordPress admin rewrite a while back. The idea is that you'd have a style guide with examples of all your UI components, then render and compare them in various states with various test data. https://github.com/davidjnelson/css-visual-test
I have been researching visual review processes for a project[0] and never ran across VisualReview. I have mainly focused on the screenshot diffing algorithm, but planned on creating a nice notification/approval UX. It might be time for a little collaboration.
Yep, I used to work somewhere that offered a multi-tenant SaaS app that customers could style themselves. We needed to make sure changes we made for a new release wouldn't cause issues with their custom styles. I threw together something that used ImageMagick and WebDriver to capture images and compare them.
I hacked up something quickly for a one-off need to render two websites as images and diff them a few years back (warning: I'm not a developer, clearly) - https://github.com/sammcj/urldiff
http://www.imagemagick.org/Usage/compose/#difference