Very cool. But one inconvenience is that the processing they do removes all moving objects from the image. Look at page 10 of the paper: the eight cars on the road in the original image are absent from the reflection-free image.
This, combined with the software (was it Microsoft's?) that composites photos of people, eliminating frames where someone is blinking or not smiling, would be epic.
Kinda seems like how humans perceive scenes - our brains somehow block out wire meshes, or raindrops if we are looking through them.
I believe it was the Photos app in the Windows Live pack that you're thinking of. Haven't used it in ages but it was a pretty streamlined way for non-experts to quickly edit snapshots like that without having to do more complicated stuff in Photoshop.
Not sure if the current Photos app does anything similar (or whether Adobe ever added something like that to PS/LR), but it was definitely effective when the source images were all similar (as in burst photography).
It definitely would - trying to get decent, clear shots out of office or transport windows is an exercise in "blocking overhead light reflection" frustration.
Ohhh this is beautiful. I have this childhood memory of my father trying to take pictures of vases behind glass in the museum, struggling to find an angle at which the polarization filter would remove most of the reflections on the glass display case...
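For the curious, the angle the parent's father was hunting for has a name: Brewster's angle, where light reflected off a dielectric like glass is fully polarized and a rotated polarizer can kill the reflection. A quick back-of-the-envelope sketch, assuming ordinary glass with a refractive index around 1.5 (the function name and numbers are just illustrative):

```python
import math

def brewster_angle_deg(n2, n1=1.0):
    """Angle of incidence (measured from the surface normal) at which
    light reflected off a dielectric is fully polarized:
    theta_B = arctan(n2 / n1)."""
    return math.degrees(math.atan2(n2, n1))

# Typical glass (n ~ 1.5) in air: the polarizer works best when
# shooting at roughly 56 degrees from the normal of the glass.
print(round(brewster_angle_deg(1.5), 1))  # -> 56.3
```

Which is also why the trick fails when you have to shoot a display case head-on: at normal incidence the reflection is barely polarized at all.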
Because academics are paid for papers and turning a paper-ready piece of software into a consumer-ready piece of software is a lot of work that could be spent on writing the next paper.
For this particular example I would guess that the implementation is a bunch of MATLAB scripts that need manual fiddling to work on a particular input. Since you don't have MATLAB on your phone, porting it is quite difficult.
They mention both the MATLAB scripts and a "C++ windows phone app" developed internally. Don't know about availability though. Indeed, turning paper algorithms into real world usable software is challenging, to say the least.
The openings in a fence are much larger than your average camera lens. If you can get right up to the fence, the problem is solved.
Reflections from glass are minimized by also sticking the camera lens right up to the glass, and providing some shading around it. A simple lens skirt could be developed for this purpose (if such a thing doesn't exist already).
Possible design: cone-shaped coil spring encased in opaque cloth, with rubber o-ring gaskets fitted on both ends. One gasket (narrow end) goes on the camera/phone around the lens, the flared end gasket goes onto the glass.
Yes, but users don't normally 'scan' a scene, they point, focus and click. So if you want to have this work without the 'scanning' element you'll need two offset sensors.
This would be a lot like how Google's "artificial lens blur" feature works[1]. I don't think it would be too difficult to teach a user to move their phone a little to scan the scene.
I agree, but this is a common trick. I remember an algorithm for making a "3D" image with a normal camera that expected the user to scan the scene in a similar way to build a depth map. (I can't find the exact link just now.)
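The reason scanning works at all is that layers at different depths shift by different amounts as the camera translates, so a reflection (whose virtual image sits close to the glass) moves differently from the background scene. A toy pinhole-camera sketch of that parallax difference, with purely hypothetical numbers:

```python
def parallax_shift_px(baseline_m, focal_px, depth_m):
    """Apparent image shift (in pixels) of a point at the given depth
    when a pinhole camera translates sideways by `baseline_m`."""
    return baseline_m * focal_px / depth_m

# Hypothetical setup: 5 cm of hand motion, 3000 px focal length.
# A reflected layer whose virtual image is ~1 m away shifts ten
# times more than a background scene ~10 m away -- that motion
# difference is what scanning-based methods exploit to separate
# the two layers.
print(parallax_shift_px(0.05, 3000, 1.0))   # reflection: 150.0 px
print(parallax_shift_px(0.05, 3000, 10.0))  # background: 15.0 px
```

Two offset sensors would give you the same baseline without the user having to move, which is the trade-off the parent comment is pointing at.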
However, the video is only available in 144p, and it starts playing before it has finished loading. The PDF takes (for me) an irritatingly long time to load in Chrome, and the images require me to zoom right in to get even close to the resolution of a fairly low-quality video stream.
Well, for me in Chrome, it took 9 seconds, so Chrome has nothing to do with it. In any case, it's a professional paper for professionals to study and download time is not an issue.