Neat! For games, it’s really because they want to emulate a cinematic look. Some filmmakers use chromatic aberration and other optical phenomena to great effect.
About 15 years ago I participated in a study of wavefront lenses (I think it was by Zeiss). The glasses were single prescription and they were hands down the best glasses I've ever owned: great vision and low chromatic aberration.
I've never seen the tech offered for single-prescription lenses since then. I really wish they'd continued with the technology.
Probably, but there may be some important difference I'm missing. I did the study quite a while ago, and it was around the time I did another study involving 'programmable' lenses where the prescription was set using a process "similar to how writable DVDs are made", or so they explained it at the time. Those lenses were shit, IIRC.
I really miss those wavefront lenses. I kept them for years before dropping them on a hike. I'd gladly pay extra for them again.
I remember the machine which mapped my eyes took only a minute or less to measure my prescription. Nothing like a phoropter, which I feel is subjective and error-prone. They also used a camera system to fit the lenses, similar to what LensCrafters used when I got a pair of Rx sunglasses a month or so ago.
Take a look at the iProfilerPlus by Zeiss; its description is 'ocular wavefront aberrometer'. I believe https://clearviewvisioncare.com (in Tucson) has one, and they are open (I know cuz they called me last week). I bet they can answer your questions.
I've got a smartphone with a not-so-good camera. Some time ago I heard about an app that could take a picture of a template and then calculate the distortions of your camera and fix them (with ML).
Does anyone know what app/research this is? I can't imagine why smartphone makers don't use this for their crappy cameras, so they could still embed cheap cameras but enhance the picture quality.
This functionality is built into OpenCV[0]. If you're using a reference image (or if you know the lens properties) it doesn't require ML. It's mostly just a matrix transform.
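If the lens parameters are already known, a minimal sketch with OpenCV's Python bindings looks something like this (the camera matrix and distortion coefficients here are made-up placeholders; real values would come from a calibration step or a stored lens profile):

    import cv2
    import numpy as np

    # Hypothetical intrinsics for illustration only -- real values
    # come from calibration or a saved lens profile.
    camera_matrix = np.array([[800.0,   0.0, 320.0],
                              [  0.0, 800.0, 240.0],
                              [  0.0,   0.0,   1.0]])
    # (k1, k2, p1, p2, k3): radial and tangential distortion coefficients.
    dist_coeffs = np.array([-0.25, 0.1, 0.0, 0.0, 0.0])

    img = cv2.imread("photo.jpg")
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    cv2.imwrite("photo_undistorted.jpg", undistorted)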
Ah yes! It looks like this is related to what I was looking for. It was a long time ago that I saw a demo about this and I was under the impression that ML was used to calculate the correction matrix.
On the one hand, Lightroom corrects my lens distortion and vignette from a saved profile of that lens; but I’d think at a smartphone level, where the parts are smaller, they’re also less even?
In other words, camera lenses are small so manufacturing defects are more noticeable. Therefore, each smartphone requires a different correction.
> In other words, camera lenses are small so manufacturing defects are more noticeable. Therefore, each smartphone requires a different correction.
Well, yes, and no, and yes. Certain defects are probably going to be more evident in small camera lenses. On the other hand, there are typically fewer elements in smaller camera lenses, which is nice: the fewer of those there are, the lower the chance of other defects.
Taking a picture as mentioned above could absolutely create a per-camera/phone correction profile. It may not be able to correct every type of defect, but I could see it being useful for some lenses. I know that for Sony cameras, there have been a couple of models where a large percentage of the copies produced have -one- soft corner.
Some of the smallest plastic lenses have geometries and optical properties which are impossible to make in glass or in larger plastic pieces. So smaller lenses don't have only disadvantages. (Even though I personally would rather take a pound of glass any day over these things.)
Not 100% equivalent, but from what I’ve heard, this can be an issue when using full frame lenses on a crop sensor DSLR. The sensor gets everything from a smaller part of the lens, so defects that don’t show on a 35mm equivalent sensor can show at APS-C.
It's not too complicated, at least in theory. Typically you decide on a lens model ahead of time that has some unknown calibration parameters (e.g. focal length, skew, radial/tangential distortion). For a given lens/sensor system, the calibration parameters can be determined ahead of time by taking pictures of a calibration pattern (checkerboard, AprilGrid, etc.). The calibration pattern provides ground-truth distances between multiple points in the image. An off-the-shelf non-linear solver is then used to solve for the unknown calibration parameters.
This usually provides acceptable results for applications like correcting lens distortion, even across multiple instances of the same lens. However, sometimes better accuracy is needed - either because the lens is very cheap and inconsistent between different examples or for applications like SLAM where better camera calibration translates directly into better results. In that case there are techniques like online calibration to tweak the calibration parameters on the fly.
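As a rough sketch of that offline step, assuming OpenCV's Python bindings and a printed checkerboard (the pattern size, square size, and filenames below are made up); cv2.calibrateCamera runs the non-linear solve internally:

    import glob
    import cv2
    import numpy as np

    # Inner-corner count of the (hypothetical) checkerboard and its
    # square size in a consistent unit, e.g. millimetres.
    pattern = (9, 6)
    square = 25.0

    # Ground-truth 3D corner positions on the board plane (z = 0).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_points, img_points = [], []
    for path in glob.glob("calib_*.jpg"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Solves for intrinsics and distortion coefficients; rms is the
    # reprojection error, a sanity check on calibration quality.
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)

The resulting camera_matrix and dist_coeffs can then be fed to cv2.undistort to correct images from that lens/sensor combination.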
Since the recent (2019) mathematical discovery allowing elimination of spherical aberration[1], I've wondered if lenses will either be simplified, or redesigned for better optical performance.
Sub-µm accuracy is not at all easy for a CNC mill, and the surface quality is way too bad for optics.
Commercial aspheres are polished with special CNC grinders, but achieve nowhere near the quality of spherical lenses. Only recently has magnetorheological finishing become available, which allows getting a bit closer in quality.
In most applications, it makes more sense to stack a few spherical lenses instead of manufacturing a super expensive custom asphere which cannot even compensate for chromatic aberrations.
I wear glasses and recently significantly increased the power on one of the lenses (weirdly, not in the other). I also got larger lenses (so distortion that happens in the corner is increased because the corner is farther from the midpoint than before).
The main issue I noticed with the new lenses was chromatic aberration. Unfortunately my brain can't fix this in post, though it has gotten used to it a bit. Something I suppose I already knew was that the lens material affects how much of this distortion you see. I thought different refractive indices would give different amounts of dispersion, though maybe I shouldn't have assumed that: lower indices mean thicker lenses, so more distance to disperse in, which might cancel out the lower dispersion. In the end it doesn't matter what I thought; the lens material does affect the dispersion.
The thing I was more surprised by was that dispersion didn't come up when making the choice of lens material. Thinking back, perhaps the only qualities discussed were "thickness" and cost. I put thickness in quotes as maybe the weight was discussed too. I feel like dispersion is quite an important quality and I wonder why it wasn't discussed. Perhaps it is hard to explain to people (I guess all the material properties are hard to explain), or perhaps it is awkward if the lenses don't get better in every way as cost goes up.
There are no spectacle lenses which correct for their own chromatic aberration the way camera lenses do, presumably because they would be too thick and delicate.
In a more general sense, I'm surprised that there's so little variety in optometrists (at least in the UK). Basically wherever you go, you get the same precision of prescription and are then given the opportunity to buy some glasses. Some studies have shown that the accuracy of the prescription you get from the subjective refractometry exam can vary a lot between practices, but there is no way to know whether a practice will be good or bad. I'm also surprised that there is no real way to get a more accurate or precise prescription than the standard; I would think that if such a thing existed, it would find customers at least in larger population centres. Maybe it is impossible to improve on the status quo because of the lack of economies of scale, or because glasses won't sit in a consistent enough position, or because the rich people who would be expected to pay for the service opt for eye surgery instead.
This comment is so far mostly unrelated to the OP, so I will make another point: I wish the article had mentioned some of the physics behind the different aberrations. For example, the reason longitudinal aberration is hard to correct in post is that it is an effect where different colours are focused to different distances, and it is hard to make a colour less out-of-focus to fix it. The reason a bokeh light circle is a certain shape is that that is the shape of the lens aperture (hence usually a circle or polygon). The misshapen bokeh effects tend to come from light coming in at more acute angles, and from those angles the lens aperture is a different shape.
If you're seeing annoying color effects it could also be the anti-reflective coating that is bad. Or just that you're not used to having it.
There are alternatives to the phoropter exam called autorefractors and wavefront sensors, which use lasers to measure your eye. I'm pretty sure the UK must have clinics that use them. Startup companies in the US are trying to make "kiosks" for eye exams based on such technologies, but run into state laws that restrict prescribing lenses to licensed optometrists. There are also startups trying to scale such technologies for the developing world, where there isn't sufficient eyecare and people with bad refractive errors are basically just left blind.
It's true that these exist, but they are rare and not typically advertised. Studies have also shown that one doesn't necessarily get better outcomes from the kind of objective refractometry that an autorefractor gives (because eyesight is a combination of the optical properties of the eye and things happening in the brain/optic nerves).
Dispersion does come up; the last time I got new lenses for my glasses, one of the brochures on the table had the Abbe number [0] listed for various materials.
Generally lenses with the highest refractive indices have fairly low Abbe numbers and have a lot of dispersion; a lower dispersion lens would have to be thicker and heavier (and thus increase geometric distortion).
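For reference, the Abbe number is defined as V_d = (n_d − 1) / (n_F − n_C), using the refractive indices at the Fraunhofer d (587.6 nm), F (486.1 nm), and C (656.3 nm) lines; a higher V_d means less dispersion. Roughly, standard CR-39 sits around V ≈ 58, while 1.74 high-index material is down around V ≈ 33.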
There are some fancy extra-low-dispersion glass formulations used in camera lenses, but apparently these are generally more fragile than typical glass, which makes them unsuited to eyeglasses.
I also asked my optician if there were any achromatic doublet glasses, but he told me that those would end up having to be extremely thick and heavy.
Some amount of aberration is inevitable in spectacle lenses, because you're severely constrained in terms of weight and thickness. Most people don't want to wear glass lenses for safety reasons (they're legally prohibited in many jurisdictions), which further constrains the options available. There just aren't many variables to play with.
Smaller lenses will reduce how noticeable aberration is, but if you have a high prescription then I'd strongly recommend going for contact lenses.
As someone who’s been photographing for a few decades now, it looks like the recent iterations of lenses from the big brands are basically “perfect.” 70s and 80s lenses had tradeoffs... with today’s lenses from Canon/Nikon/Sony, those made in the last 5 years or so, the optical quality after digital correction is so high that for any non-extreme form of photography they can be considered perfect. And that goes from lenses like Canon’s 35 1.4 ($1800) down to their 10-18 ($250). The gearheads can natter online but the returns here are now nearly fully diminished.
Many lenses from the 80s/90s hold their own compared to modern designs. We’ve been at the plateau of the S curve for a while. The really exciting stuff is happening in smartphones, where miniaturization is the driving variable - for instance the latest folded optics designs.
I'd say the really exciting stuff happening in smartphones are the things going on with computational photography, like Apple's Deep Fusion. This is moving beyond the limitations of single lenses and single sensors and enabling some pretty impressive things.
I think "The gearheads can natter online" is a bit reductionist.
You say that lenses are essentially perfect for "non-extreme" photography but for most "non-extreme" photography, frankly, a modern smartphone is just as good.
These days, the main practical reason why most people would buy a dedicated camera is so that they can handle those "extreme" situations, so it does make sense to care about how far a lens can be pushed.
For example, I shoot a lot of wildlife on Sony's A7RIV, and given the sensor's crazy resolution, if I have a lens that can resolve enough fine detail, I can crop to get extra reach. This allows me to take shots which would otherwise be impossible.
For a less "extreme" case, regarding portraits, you might think people arguing about the quality of bokeh are "nattering" but it can make a big difference to your image. Compare the portrait shots in [0] and look at how different the background becomes.
https://github.com/yoonsikp/kromo
As a side note, isn't it ironic that games now resort to adding aberrations to make it seem more real?