Hacker News
A Computational Approach for Obstruction-Free Photography [pdf] (csail.mit.edu)
121 points by jestinjoy1 on Aug 5, 2015 | hide | past | favorite | 37 comments



Very cool. But one inconvenience is that the processing they do removes all moving objects from the image. Look at page 10 of the paper: the eight cars on the road in the original image are absent from the reflection-free image.


If the cars are what you wanted in the picture, it is a shortcoming. However, they are not part of the background (i.e. static imagery).


I would love to have the app they describe in this paper on my phone. I suspect that people would pay good money for this.


This, in combination with the software (Microsoft's, was it?) that composites photos of people, eliminating frames where people aren't smiling or are blinking, would be epic.

Kinda seems like how humans perceive scenes - our brains somehow block out wire meshes, or raindrops if we are looking through them.


I believe it was the Photos app in the Windows Live pack that you're thinking of. Haven't used it in ages but it was a pretty streamlined way for non-experts to quickly edit snapshots like that without having to do more complicated stuff in Photoshop.

Not sure if the current photo app does anything similar (or even if Adobe added something like that to PS/LR) but definitely effective if the source images were all similar (as in burst photography).


Definitely would - trying to get decent clear shots out of office / transport windows is an exercise in "blocking overhead light reflection" frustration.


Their ability to recover the reflected image seems valuable to security agencies.


Ohhh this is beautiful. I have this childhood memory of my father trying to take pictures of vases behind glass in the museum, struggling to find an angle at which the polarization filter would remove most of the reflections on the glass display case...


How come I see videos from academics of really, really cool things like this, but I can't buy an app that does it?


Because academics are paid for papers and turning a paper-ready piece of software into a consumer-ready piece of software is a lot of work that could be spent on writing the next paper.

For this particular example I would guess that the implementation is a bunch of MATLAB scripts that need manual fiddling to work for a particular input. Since you don't have MATLAB on your phone, porting it is quite difficult.


They mention both the MATLAB scripts and a "C++ Windows Phone app" developed internally. I don't know about availability, though. Indeed, turning paper algorithms into real-world usable software is challenging, to say the least.


This research was partly sponsored by Google, so I imagine the tech will ultimately find its way into the Android camera app.


The eyelets in a fence are much larger than your average camera lens. If you can go right up to the fence, the problem is solved.

Reflections from glass are minimized by also sticking the camera lens right up to the glass, and providing some shading around it. A simple lens skirt could be developed for this purpose (if such a thing doesn't exist already).

Possible design: cone-shaped coil spring encased in opaque cloth, with rubber o-ring gaskets fitted on both ends. One gasket (narrow end) goes on the camera/phone around the lens, the flared end gasket goes onto the glass.


It could be interesting to apply it to existing videos on YouTube. It could reveal some perceptually hidden information in reflections.


Here's the official Siggraph 2015 submission, linking to both the PDF as well as the demonstration video:

https://sites.google.com/site/obstructionfreephotography/



Nice work! What are the applications for this besides taking clean pictures through glass windows or fences?

Security agencies applying the algorithm to see if they can obtain some extra information out of the reflected pictures?


Ah, the end of bathroom-mirror selfies? :D


This could be a nice feature on a camera, but you'd need a second sensor offset from the first to make that work.


From the article:

> The input to our algorithm is a set of images taken by the user while slightly scanning the scene with a camera/phone [...]

As mrb noticed, this has the side effect that the algorithm removes moving objects from the scene, for example cars.
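A toy sketch of why that side effect arises (this is not the paper's actual optimization, which jointly estimates dense motion fields for the background and the obstruction layer, but a much simpler stand-in): once the frames are aligned to the static background, a per-pixel temporal median keeps whatever stays put and discards anything that moves between frames.

```python
import numpy as np

# Toy illustration (not the paper's method): frames already aligned to
# the background. A per-pixel temporal median keeps the static scene and
# drops transient content -- e.g. a passing car.
background = np.full((4, 8), 100.0)            # static scene
frames = np.stack([background.copy() for _ in range(5)])

# A "car" (bright pixel) appears at a different column in each frame.
for t in range(5):
    frames[t, 2, t + 1] = 255.0

recovered = np.median(frames, axis=0)          # per-pixel temporal median

print(np.array_equal(recovered, background))   # True: the moving blob vanishes
```

Each pixel sees the bright value in at most one of the five frames, so the median always picks the background value, which is exactly why the cars disappear from the recovered image.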


Yes, but users don't normally 'scan' a scene, they point, focus and click. So if you want to have this work without the 'scanning' element you'll need two offset sensors.


This is a lot like how Google's "lens blur" feature works[1]. I don't think it would be too difficult to teach a user to move their phone a little to scan the scene.

[1] http://googleresearch.blogspot.com/2014/04/lens-blur-in-new-...


I agree, but this is a common trick. I remember an algorithm to make a "3D" image with a normal camera that expects the user to scan the scene in a similar way to create a depth map. (I can't find the exact link just now.)


RTFA: this is covered in the Fig. 1 description and the abstract.


Examples in the PDF have a CSI zoom-enhance feeling to them... good stuff.


TL;DD (Too Large, Didn't Download)

I did search google for "Obstruction free Photography" and there is one video on youtube that explains the technology and provides examples.

https://www.youtube.com/watch?v=xoyNiatRIh4


The file is 31 MB. The video you linked is 75 MB in 720p.


However the video is available in 144p, and starts playing before it's finished loading. The PDF takes (for me) an irritatingly long time to load in chrome, and the images require me to zoom right in to get even close to the same resolution as a fairly low quality video stream.


Well, for me in Chrome it took 9 seconds, so Chrome has nothing to do with it. In any case, it's a professional paper for professionals to study, and download time is not an issue.

But all that is OT anyway.


The PDF took 67s to download. The YT video plays immediately.


PDF.js can stream the file just fine; it'll start displaying the first page before it finishes loading the rest.


the bottleneck is at the source


Remarkable results, especially the 'removed' artefact turning into another clear picture.


It does have a lot of example photographs in it, but it's only 32 MB.


Looks like one side effect is removing anything that moves, like the cars in one example in the video...


This seems to be a trend with academic articles. I don't think they have bosses saying "that's great but Page Speed says..."


Enhance. Enhance. Enhance. Just print the damn thing![0]

[0] https://www.youtube.com/watch?v=KiqkclCJsZs



