Perfect colors, captured with one ultra-thin lens (seas.harvard.edu)
108 points by porsupah on Feb 20, 2015 | 39 comments



This looks somewhat like the concept behind a Fresnel zone plate [1] - in general these are highly dispersive (they only work at one wavelength), so it looks like they have done something to address this. Many (30+) years ago my final year project at university was trying to dynamically synthesise lenses by electronically manipulating the phase response across a liquid crystal substrate - the technology wasn't really there at the time, but I've always thought it would have been an interesting avenue to follow up on.

[1] http://en.wikipedia.org/wiki/Zone_plate
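For anyone curious why zone plates are so dispersive: the ring radii are fixed for one design wavelength, so the focal length of the same plate scales inversely with wavelength. A minimal sketch of that (plain zone plate maths with an assumed design wavelength and focal length, not the metasurface in the article):

    import math

    design_wavelength = 550e-9   # metres (green); assumed design point
    design_focal_len = 0.10      # metres; assumed
    n_zones = 100

    # Paraxial zone radii for the design wavelength: r_n ~ sqrt(n * lambda * f)
    radii = [math.sqrt(n * design_wavelength * design_focal_len)
             for n in range(1, n_zones + 1)]

    # The same plate focuses any other wavelength at roughly f' = r_1^2 / lambda
    for wavelength in (450e-9, 550e-9, 650e-9):
        focal = radii[0] ** 2 / wavelength
        print(f"{wavelength * 1e9:.0f} nm -> focal length {focal * 100:.1f} cm")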


If this actually works, it's got to be one of the biggest technology advances of the past 20 years.

I'm guessing there's a catch, but I would love to be wrong.


My guess on a catch would be that it'll focus ONLY the wavelengths that you select for. The others might be diffused or given wildly different focal points. This might mean that for certain types of photography you're limited in what you pick up (e.g. if you've lit a scene with monochromatic light that's not one of the selected wavelengths).


They did say in the article that they hope to produce devices that work for a range of wavelengths. This would be great because 3 wavelengths would not be sufficient for capturing natural images. We can approximate the cone responses of natural images with 3 wavelengths, which is why RGB encoding and display works for displaying images to humans. But in capture, you want to receive broad spectrum light at the detector, and then encode it to RGB.


Neither the sensor elements in a camera nor the pixels in a monitor are selective for a single wavelength, but rather a spectrum. So we don't actually approximate cone response that way. The fact that we store the channel values as three integers does not change this.
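To make that concrete, here's a toy sketch of how a channel value comes from integrating the whole incoming spectrum against a broad sensitivity curve, rather than sampling three single wavelengths. The sensitivity curves below are made up for illustration, not real sensor data:

    import math

    wavelengths = range(400, 701, 10)  # nm

    def gaussian(x, mu, sigma):
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    # Assumed, idealised channel sensitivities: (peak nm, width nm)
    sensitivities = {"R": (600, 40), "G": (550, 40), "B": (450, 30)}

    # Some incoming spectrum, here a roughly flat "white" light
    spectrum = {w: 1.0 for w in wavelengths}

    for name, (mu, sigma) in sensitivities.items():
        value = sum(spectrum[w] * gaussian(w, mu, sigma) for w in wavelengths)
        print(name, round(value, 2))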


You're right that monitor pixels aren't individual wavelengths, but the spectrum of a reproduced image can be very different than the spectrum of the natural image. My point was just that you don't want your flat lens harshly filtering that natural image by only operating at 3 highly tuned wavelengths -- that's not going to be enough to capture natural images, despite the fact that we can reproduce them with 3 components.


There's a link to another article at the bottom of this one that hints at other possible uses. http://www.seas.harvard.edu/news/2014/05/collaborative-metas...

What's really interesting about this is that the metasurface lenses might be very useful in other high-precision optics: for example, densely packed multicolor DVDs?

Perhaps even optical imaging of very, very small materials without the need for electron microscopy. Who knows?


I admit that I'm biased toward personal interests and hobbies but my first thought was an Oculus Rift that fits more like a pair of shades than a pair of ski goggles.

And of course the ever-present dream of "smart" contact lenses that can adjust how incoming light is magnified on an adaptive basis or offer AR functionality without any headgear at all.

Obviously that sort of application is years away regardless of how this particular project works out. Still, this stuff does lead to some fun brainstorms and geeky fantasies.


Optical discs generally already use the shortest-wavelength light they can manage; can colors even help?

Plus they're focusing on a single point, so I don't think chromatic aberration would be much of a problem in the first place.

If they can play fancy tricks to get a smaller focal point relative to wavelength, that would be a powerful improvement. But still probably monochrome discs.


This reminds me of an explanation I read a long time ago for why FM radio is so much clearer than AM. This doesn't really approach your question of informational density, but I thought it was interesting.

Imagine you're driving down the road and someone is shining two lights through the trees: one is white light and merely changes brightness, while the other rotates through three colors. Which light is easier for you to detect changing?


It could be something akin to holography.

To put it simply, the disc could keep the same optical density but be written over several times in other colors. The same lens could then read the exact same point and interpret the data in (for example) blue, green, and red.


That is something that's possible, but I doubt its practicality. The longer wavelengths can't store as much data even under ideal conditions, and squeezing them in causes interference that hurts the density you have on the best wavelength.

Plus you would need a much more elaborate disc-pressing mechanism.

Much simpler to add more monochrome layers.
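Rough numbers behind the wavelength point, assuming areal density scales as 1/lambda^2 (the smallest resolvable spot scales with wavelength over numerical aperture; figures are illustrative, not real disc specs):

    # Back-of-the-envelope: density scales roughly as 1/lambda^2
    red_nm, blue_nm = 650, 405  # roughly DVD vs Blu-ray laser wavelengths
    for name, wavelength in (("red", red_nm), ("blue-violet", blue_nm)):
        relative = (red_nm / wavelength) ** 2
        print(f"{name} ({wavelength} nm): ~{relative:.1f}x the areal density of red")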


Better images out of cell phones and SLRs, or is that not a logical application of this technology?


It's not just that the images will be better, it's that the sensors can get smaller. The silly bump that represents the camera on the iPhone6/Plus wouldn't be there if this existed. Google Glass could be far cheaper. Smaller SLRs, even! And everything will have far better quality than before. Truly revolutionary.


I'd rather say that this means the sensors can get bigger for compact cameras and phones. Currently the only thing that keeps us from making pocketable full frame cameras with zoomable optics is the size of the optics, not the size of the sensor or other electrical components. Flat optics might even make full frame camera phones possible.


I doubt this is a big advantage because you would capture everything that is close to the sensor (dust, fingers, scratches etc.). A lens effectively applies an extreme low-pass filter to these things because they are massively out of focus.


From my own experience I can say it would be an enormous advantage when shooting in low light, at least for current sensors. Where I live, for four months in winter we have only limited hours of dim daylight, and current phone sensors are pretty much useless. You are right about dust though, and extra care has to be taken, but that seems like a small inconvenience to me compared to sub-acceptable noise levels.


Actually, the bump on the iPhone 6 shouldn't be there now. There are phones that are just as slim, with just as big if not bigger sensors, and they are perfectly flat. The problem is Apple decided not to change the sensor this year and used a pretty old one, which hasn't gotten any more compact, even though the phone's body has.

Watch Apple introduce a 13 or 16MP camera this year that has no bump. Why? Because this time it will actually use a cutting edge sensor, rather than a 3 year old one.


We may also have super tiny lens free cameras that avoid these issues entirely: www.technologyreview.com/news/525731/lens-free-camera-sees-things-differently/


Part of the bump must be there because you need a finite focal length system, and that won't change even if the lens itself is flat.
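A quick thin-lens sanity check of that: for a distant subject the image forms roughly one focal length behind the lens, flat or not, so the lens-to-sensor distance can't shrink to zero. The 4.2 mm focal length below is just an assumed, ballpark phone-camera value:

    # Thin-lens relation: 1/f = 1/d_object + 1/d_image
    f_mm = 4.2  # assumed phone-camera focal length
    for d_object_mm in (1e6, 1000.0, 300.0):  # "infinity", 1 m, 30 cm
        d_image = 1.0 / (1.0 / f_mm - 1.0 / d_object_mm)
        print(f"object at {d_object_mm / 1000:.1f} m -> image plane at {d_image:.2f} mm")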


Do you mean lenses can get smaller, or is there some way this means the sensors can be smaller too?


Sure, but also telescopes could be a lot smaller. The article mentions microscopes too.


This should only apply to refracting telescopes, and not reflecting telescopes I think.


Chromatic aberration is one reason you don't see many large/professional telescopes using lenses. Another is that enormous, thick, heavy lenses aren't very practical. This seems to solve both problems.


There are a lot of reasons telescopes use mirrors over lenses, only some of which this resolves. Also, will Capasso's work scale to the meters-wide telescopes you want in order to collect a lot of light?


Telescopes also require high optical efficiency, because the entire point is to capture as many photons as possible. It seems likely that any application that uses diffraction has higher-order effects that send part of the light in unwanted directions, even if there is a distinct focus.


I'd imagine (without knowing anything about it) that it would also have applications in fiber optics?


Forget cameras, I want the bionic eyes we were promised. I hope this gets us a step closer.


The key part is that it has different refraction for specific wavelengths. This implies that you will still need filters to restrict the light to those wavelengths or you will still have chromatic aberration. Projecting light through such a lens wouldn't have quite the same problem as the source light could already be tuned.

In short, good for projection, but maybe still bad for capture.


"Light shining on it bends instantaneously, rather than gradually, while passing through." I don't think light bends gradually going through a normal lens. It bends at the surface.


They were being a bit imprecise with the terms. The important part is that refractive index depends on wavelength, so you get dispersion, which increases with the distance between surfaces. But in a regular lens, that distance is necessary in order to achieve an overall deflection of the light beam from its initial angle, which is what produces focusing. A flat piece of glass deflects light internally, but upon exiting, every beam is parallel to how it entered, resulting in no focusing.

Being able to directly deflect at a flat, thin plane eliminates the need to consider dispersion. Well, as long as you can deflect all wavelengths at the same angle. They talk about demonstrating 3 wavelengths and seeking to extend this to more.
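A quick numeric illustration of that wavelength dependence. The Cauchy coefficients below are ballpark textbook values for a crown-type glass, assumed here just to show the effect:

    import math

    def n_cauchy(wavelength_um, A=1.5046, B=0.00420):
        # Cauchy approximation: index rises as wavelength shortens
        return A + B / wavelength_um ** 2

    incidence_deg = 30.0
    for wavelength_nm in (450, 550, 650):
        n = n_cauchy(wavelength_nm / 1000.0)
        # Snell's law: sin(theta_in) = n * sin(theta_out)
        theta_out = math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))
        print(f"{wavelength_nm} nm: n = {n:.4f}, refracted angle = {theta_out:.2f} deg")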


I don't think that makes sense; otherwise, why would the thickness of the lens matter at all?


It doesn't. Light can bend when it encounters a change in the index of refraction, e.g. air to glass or vice versa. https://en.wikipedia.org/wiki/Refraction#Explanation Light travels in a straight line after it has entered the glass, and again after it exits. Only the angle and the difference in refractive index matter. That's why you can build a Fresnel lens, which is, so to speak, all surface. https://en.wikipedia.org/wiki/Fresnel_lens#Description
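As a minimal sketch of why a flat plate doesn't focus (arbitrary illustrative values, assuming n = 1.5 glass):

    import math

    # A ray through a flat parallel slab bends at each surface but exits
    # parallel to its original direction (only laterally shifted).
    n_air, n_glass = 1.0, 1.5
    theta_in = math.radians(40.0)

    theta_glass = math.asin(n_air * math.sin(theta_in) / n_glass)   # air -> glass
    theta_out = math.asin(n_glass * math.sin(theta_glass) / n_air)  # glass -> air

    print(f"entry {math.degrees(theta_in):.1f} deg, "
          f"inside {math.degrees(theta_glass):.1f} deg, "
          f"exit {math.degrees(theta_out):.1f} deg")  # exit == entry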


that's really interesting. apparently I need to read up


The shape of the lens puts the surface of the lens at the correct angle. You can make a thin lens by splitting it into many discontinuous parts; this is how a Fresnel lens works. See e.g. these two pictures:

http://upload.wikimedia.org/wikipedia/commons/9/97/Lens3b.sv... http://en.wikipedia.org/wiki/File:Fresnel_lens.svg


OP above is correct. The light will only bend at the interfaces between two mediums (air, glass).


so, no chromatic aberration. no lens flare either?


It sounds like an Apple marketer devised the article's title ... ;-)



