The New Science of Seeing Around Corners (quantamagazine.org)
93 points by tonybeltramelli on Sept 3, 2018 | 10 comments



I'd love to see some examples of these reconstructed images, especially ones from photos taken before this technology was developed. This work reminds me of a technology I read about a few years ago that could determine an actor's heart rate from movie footage. There's so much hidden information in the recordings we make, if only we can figure out how to extract it.


There isn't enough data in normal photographs.

The techniques presented here only work if you set up the environment, or at least know a lot about it. The only case where they could actually reconstruct an object was an extremely elaborate setup with a very simple scene and a controlled laser to scan it.

Just letting your algorithm run over random photos won't reveal a thing.


> The techniques presented here only work if you set up the environment, or at least know a lot about it.

Sorry, which techniques are you referring to? It's probably this one, but it's just one small part of the article:

While Freeman, Torralba and their protégés uncover images that have been there all along, elsewhere on the MIT campus, Ramesh Raskar, a TED-talking computer vision scientist who explicitly aims to “change the world,” takes an approach called “active imaging”: He uses expensive, specialized camera-laser systems to create high-resolution images of what’s around corners.
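
Roughly, those laser systems work by timing photons: a pulse travels laser -> wall -> hidden object -> wall -> camera, and each measured return time constrains the hidden point to an ellipsoid with foci at the two wall points. Here's a toy sketch of that constraint in Python -- my simplification with made-up numbers, not Raskar's actual pipeline:

    import numpy as np

    # Toy sketch of the time-of-flight geometry behind around-the-corner
    # laser imaging (a simplification, not the real reconstruction). A
    # pulse travels laser -> wall point L -> hidden point X -> wall point
    # S -> camera; a measured round-trip time t pins X to an ellipsoid
    # with foci L and S. All coordinates and distances here are made up.

    C = 3e8  # speed of light, m/s

    def ellipsoid_residual(X, L, S, t, d_laser_L, d_S_camera):
        """Distance by which candidate hidden point X misses the
        time-of-flight constraint (zero means X is consistent with t)."""
        path = (d_laser_L + np.linalg.norm(L - X)
                + np.linalg.norm(X - S) + d_S_camera)
        return path - C * t

    L = np.array([0.0, 0.0, 0.0])   # laser spot on the visible wall
    S = np.array([1.0, 0.0, 0.0])   # spot on the wall the camera watches
    X = np.array([0.5, 0.5, 0.0])   # a hidden point around the corner

    # Fabricate the return time X would actually produce, then verify:
    t = (1.0 + np.linalg.norm(L - X) + np.linalg.norm(X - S) + 1.0) / C
    print(ellipsoid_residual(X, L, S, t, 1.0, 1.0))  # ~0.0

Intersecting many such ellipsoids, from many laser spots and return times, is what carves out the hidden shape.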

The article starts by describing an observation of very-poor-quality camera obscura images (the faint image of the outside world you sometimes get through a window) and goes on to talk about many other ways to extract information from images that isn't easily seen. Most of those techniques seem to use two images of the same scene, with some change between them, to actually get at that information. That means most old photos cannot be analysed, but old videos can be.

Imagine you’re filming the interior wall of a room through a crack in the window shade. You can’t see much. Suddenly, a person’s arm pops into your field of view. Comparing the intensity of light on the wall when the arm is and isn’t present reveals information about the scene. A set of light rays that strikes the wall in the first video frame is briefly blocked by the arm in the next. By subtracting the data in the second image from that of the first, Freeman said, “you can pull out what was blocked by the arm” — a set of light rays that represents an image of part of the room. “If you allow yourself to look at things that block light, as well as things that let in light,” he said, “then you can expand the repertoire of places where you can find these pinhole-like images.”
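
For what it's worth, here's a minimal sketch of that subtraction idea in Python/NumPy, with made-up stand-in arrays rather than real frames:

    import numpy as np

    # Minimal sketch of the "anti-pinhole" idea Freeman describes above,
    # assuming two registered frames of the same wall: one with the
    # occluding arm present and one without. The array contents are
    # made-up stand-ins, not real data.

    def recover_blocked_rays(frame_open, frame_occluded):
        """Subtracting the occluded frame from the open one isolates the
        set of light rays the occluder blocked, which together form a
        pinhole-like image of part of the hidden scene."""
        return frame_open.astype(np.float64) - frame_occluded.astype(np.float64)

    frame_open = np.random.rand(480, 640)   # wall, arm absent (stand-in)
    frame_occluded = frame_open * 0.97      # wall, arm blocking some rays
    hidden_image = recover_blocked_rays(frame_open, frame_occluded)

    # In practice the blocked component is a tiny fraction of the total
    # light, so many frame pairs would be averaged to beat sensor noise.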


This reminds me of the scene from the remarkably prescient original Blade Runner (1982) where Deckard is able to look around corners in a photograph -- he tells the computer to "pull out, track right, track 45 right, enhance,..."[1] and finds a woman who wasn't visible at first.

[1] https://www.imdb.com/title/tt0083658/quotes

EDIT: Rereading the quotes from the script, it seems like Deckard was using the reflection from a mirror to look into another room. However, in the movie it felt like he was going around a corner to get to the mirror.


There is nothing "prescient" about that scene. It's fiction, and it will never become reality. There's no way to capture an image with that much information in it. Noise will always limit you, especially in dimly lit indoor scenes like that in Blade Runner. If you want to do that kind of trickery, you need a big controlled setup -- nothing you can put in a small box like a camera. And it's not a question of miniaturisation, the problem is that there's a physical limit to the resolution you can capture with a device of a given size.
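
For a sense of that size limit: the Rayleigh criterion puts the best resolvable angle at theta ~ 1.22 * lambda / D for an aperture of diameter D. A quick back-of-the-envelope in Python, with illustrative numbers of my own:

    lam = 550e-9   # green light, metres
    D = 0.01       # 1 cm aperture, about a compact camera lens
    dist = 3.0     # subject 3 metres away

    theta = 1.22 * lam / D      # Rayleigh limit: best resolvable angle, radians
    feature = theta * dist      # smallest feature resolvable at that distance
    print(f"{feature * 1e3:.2f} mm")  # ~0.20 mm for these numbers

And that's the best case, looking straight at the subject. A bounce off a diffuse wall scrambles the ray directions, so the indirect signal is spread out and buried in noise long before diffraction even matters.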


What if the camera was super high resolution and hyper-spectral?


There's parallax in the photo when he pans, unveiling the woman, if memory serves.

It was definitely sci-fi photography tech.


I wonder if the pin-speck technique could be used to produce a 3D reconstruction of the Chelyabinsk meteor:

1) There were huge numbers of recordings

2) It generated plenty of clear shadows (and presumably sun shadows can be recovered as well)

3) For high-intensity scenes, many cameras might shorten their exposure time frame by frame, so each scan line samples a shorter period; a collection of cameras can thus be treated as performing random oversampling at a higher frame rate, as some oscilloscopes do when not in single-shot mode (see the sketch after this list)

4) The recordings were taken from many vantage points across a wide region, potentially allowing a 3D reconstruction of the meteor as it broke up
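
Here's a toy model of point 3 in Python, with made-up numbers, just to show how unsynchronised rolling-shutter line times interleave into denser sampling:

    import random

    # Toy model for point 3, with made-up numbers. Each rolling-shutter
    # camera exposes its scan lines sequentially within a frame, so a
    # line index maps to a time offset; cameras with unsynchronised
    # clocks then jointly sample the event far more densely than any
    # single frame rate. (Ignored here: each line images a different
    # strip of the scene, which a real reconstruction must account for.)

    FPS = 30
    LINES = 1080
    LINE_DT = 1.0 / (FPS * LINES)  # time between consecutive scan lines

    def line_times(phase, frames=3):
        """Sample times of every scan line of one camera whose frame
        clock is offset from the event by `phase` seconds."""
        return [phase + f / FPS + l * LINE_DT
                for f in range(frames) for l in range(LINES)]

    cameras = [line_times(random.uniform(0, 1 / FPS)) for _ in range(5)]
    merged = sorted(t for cam in cameras for t in cam)

    gaps = [b - a for a, b in zip(merged, merged[1:])]
    median_gap = sorted(gaps)[len(gaps) // 2]
    print(f"single camera line interval: {LINE_DT * 1e6:.1f} us")
    print(f"median merged sample gap:    {median_gap * 1e6:.1f} us")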


First heard this concept mentioned at SIGGRAPH in 2005 as “dual photography”. Here’s the paper:

https://graphics.stanford.edu/papers/dual_photography/


Wasn't there a TED talk a while back where a research team presented a camera that could do this -- creating images by looking around the corner?



