Many of the input pictures look like they were constructed from stock backgrounds. The one with the Golden Gate Bridge from the Marin Headlands, for example, appears to have been shot from here[1], but the fence at that spot isn't in the picture. The model was either standing on something to gain height, or was added later. So the algorithm is sometimes just undoing compositing.
How does research like this make it into a Photoshop filter? Can anyone implement the technique, or is it understood to be copyrighted by the authors of the academic paper?
Hmm, there are a few interesting things here that aren't in Photoshop et al. For a while now I've been wondering whether some kind of "code bounty" site exists, where suggestions for new pieces of software are posted and efforts combined. Even on HN you often see people say "I'm working on this/something..." in response to articles. A collaboration and quasi "Show HN" site would be good.
Actually, there was something like that called upboat.us, but I think it closed...
Apparently algorithmic rotoscoping (making a mask for separate objects in a scene) is called "segmentation" in academia; see the sketch below for what that looks like in code.
[1] http://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn....
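As an illustration (not from the thread): a minimal Python sketch of segmentation using torchvision's pretrained FCN, which happens to be the architecture from the paper linked in [1]. The input filename, the model choice, and the VOC "person" class index are assumptions for the example, not anything the posters specified:

    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.models.segmentation import fcn_resnet50

    # Pretrained fully convolutional network; older torchvision versions
    # take pretrained=True, newer ones use a weights= argument instead.
    model = fcn_resnet50(pretrained=True).eval()

    # Standard ImageNet normalization expected by the pretrained weights.
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
    batch = preprocess(img).unsqueeze(0)          # shape (1, 3, H, W)

    with torch.no_grad():
        logits = model(batch)["out"][0]           # (num_classes, H, W)

    # Per-pixel class labels; with these weights the classes follow
    # PASCAL VOC, where index 15 is "person".
    labels = logits.argmax(0)
    person_mask = (labels == 15)                  # boolean per-pixel mask

The boolean mask is roughly what a roto matte starts from; turning it into a usable Photoshop selection would still need edge refinement, and video rotoscoping adds temporal smoothing on top.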