Earth rotation limits in-body image stabilization to 6.3 stops (2020) (thecentercolumn.com)
166 points by pwnna 4 months ago | 105 comments



Well, if we're nitpicking here, it is not 86,400 s/day (24 hours * 3600 s/hour) and 7.27x10^-5 radians/s, but 86,164.091 s and 7.29x10^-5 radians/s.

24 hours is the time it takes the sun to return to the same spot in the sky: the Earth has to rotate for another 3m 56s to make up for the angle gained by revolving around the Sun in the same direction as its rotation. The same applies to the other planets that rotate and revolve in the same direction: Mercury, Mars, Jupiter, Saturn, and Neptune. A sidereal day, the time for distant stars to return to the same spot in the sky, is 23h 56m 4.091s.
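
For anyone checking the arithmetic, the two rates in Python:

    import math

    solar_day = 24 * 3600              # 86,400 s: sun back to the same spot
    sidereal_day = 86164.091           # 23h 56m 4.091s: stars back to the same spot

    print(2 * math.pi / solar_day)     # ~7.2722e-05 rad/s
    print(2 * math.pi / sidereal_day)  # ~7.2921e-05 rad/s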

Damn, I knew that is why I botched my 6-stop exposure at my daughter's graduation! She can't blame me now! Thank you HN!


>Damn, I knew that is why I botched my 6-stop exposure at my daughter's graduation!

How about driving a tank with its stabilized gun trained on the target for the whole 6-stop exposure before taking the shot? Now the tank gunner has an excuse too.


I shot competitively in JROTC and after, but never out to 1000 yards (914 m), only 500 yards. The Earth's rotation affects your trajectory significantly, particularly if you are shooting longitudinally at a target at a higher or lower latitude: the Coriolis effect. I had more issues with varying winds and with being consistent across shots.


You don't need GPS to figure out the correction for this. Inertial navigation systems in aircraft (which use very stabilised platforms with a lot of math involved) worked before GPS was available.

It helps to have a rough indication of the current latitude on startup, but you can also figure it out from the gyro outputs. Just takes longer.

With modern sensors (solid state laser gyroscopes) it has all become a lot smaller so if you really want to you can do this in a camera. It's just probably going to be too expensive for what it brings, because 6+ stops of stabilisation is a lot already.


> You don't need GPS to figure out the correction for this.

Perhaps not, but a lot of cameras already have it for geotagging purposes (EXIF), so why not use it:

* https://en.wikipedia.org/wiki/List_of_cameras_which_provide_...

* https://www.digitalcameraworld.com/buying-guides/best-camera...


Because location doesn't give orientation, which is what you actually need to know how to correct.

Simply using the accelerometers and gyro directly does give the info you want. GPS is useless.


Aerospace-grade laser gyroscopes are incredibly expensive (and bulky), and even then, they still have massive drift after several hours. If you don't have GPS to relocalize precisely at least every day, there is no way you can know the location of the camera on Earth for more than a day, even with state-of-the-art aerospace stuff.


> they still have massive drift after several hours. If you don't have GPS to relocalize precisely at least every day

I think you may be confusing two concepts: Measurement of true north and latitude via gyro (what the GP is talking about) and inertial navigation systems (which, yes, do drift).

You can measure those two things with just a single-axis gyro and no external references using a technique called "gyro-compassing". In fact, most inertial navigation systems use gyro-compassing to directly measure true north and latitude to align the system on initial startup.
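
A minimal sketch of the idea, assuming ideal noise-free sensors on a stationary platform (the function and frame conventions here are mine, not any real INS's):

    import numpy as np

    def gyrocompass(gyro, accel):
        # A stationary accelerometer reads the reaction to gravity: +1 g, pointing up.
        up = accel / np.linalg.norm(accel)
        # Decompose the measured earth rotation (rad/s) into vertical and
        # horizontal parts; the horizontal part points at true north.
        omega_up = np.dot(gyro, up)
        north = gyro - omega_up * up
        latitude = np.degrees(np.arctan2(omega_up, np.linalg.norm(north)))
        return latitude, north / np.linalg.norm(north)

At 45 degrees latitude the horizontal component is only ~0.003 deg/s, which is why real systems have to average for minutes.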


Realistically GPS is the answer, but it’s notable that you could also use a simple light sensor combined with accurate clocks to get your position on earth:

https://en.m.wikipedia.org/wiki/Light_level_geolocator


> Recording light levels over time

Wonder how much time is needed to determine location.


You just need to determine the time of sunrise and sunset relative to a known location and you get a rough idea of latitude and longitude.
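
A toy sketch of that math (ignoring refraction, the equation of time, and the sun's finite disc; it also breaks down near the equinoxes, when the day is ~12 h everywhere and latitude is poorly determined):

    import math

    def rough_position(sunrise_utc, sunset_utc, declination_deg):
        # Times in decimal hours, UTC. Local solar noon gives longitude
        # at 15 degrees per hour of offset from 12:00 UTC.
        solar_noon = (sunrise_utc + sunset_utc) / 2
        longitude = (12.0 - solar_noon) * 15.0
        # Sunrise equation: cos(H) = -tan(lat) * tan(declination),
        # where H is half the day length expressed as an hour angle.
        H = math.radians((sunset_utc - sunrise_utc) / 2 * 15.0)
        delta = math.radians(declination_deg)
        latitude = math.degrees(math.atan(-math.cos(H) / math.tan(delta)))
        return latitude, longitude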


But the location of the camera doesn't matter. You only need to figure out very roughly at which latitude you are to know by how much to compensate for earth rotation. And you can do that with the sensors that you're already using to do the stabilisation. That was my point... no need for GPS.


> worked before GPS was available.

"Worked" makes it seem like you throw a switch and it just gives you position data. Those units take anywhere from 6 to 10 minutes to align; if you move the platform, it will error out and you must restart the alignment. Current systems take their initial fix from GPS, but with the initial systems the operator had to know the position and key it into the unit manually.

"Worked" with extreme care operated by a qualified professional.


I mean, surely if you are doing something that requires this level of precision, you could just ask the user to input their current known location? I doubt the difference in compensation would matter even if the user misdialed by ten or twenty meters (or even if the camera was actually moving around).


> Inertial navigation systems in aircraft (which use very stabilised platforms with a lot of math involved) worked before GPS was available.

Inertial measurement units for aircraft and submarines cost as much as a house in California. Good luck putting those in a phone.


The IMUs that existed on aircraft before the invention of GPS have been superseded by the ones which actually are in your phone, in much the same way and for much the same reason that a $20 Casio F-91w keeps better time than a fancy Rolex that costs more than a house in California: electronics are cheaper and better than mechanical systems.

We have, naturally, also made better IMUs for places where it matters, ones which won't fit in your phone.

The question is therefore not suited to "aircraft grade, yes or no?", it's "how expensive is the cheapest IMU that's good enough for the specific need?" which in this case itself depends on how many stops is desired.


Actually that F91W does not keep very good time.

They're pretty excellent if you keep them on a shelf, but if you run around outside in the hot and cold (you know, like people use a watch) they'll deviate quickly, because they don't have a temperature-controlled (or even compensated) oscillator. A real TXCO (basically putting the crystal inside a temp calibrated oven) is not feasible on a watch battery but compensation sure would be.


I picked it not because it's good, but to illustrate the cheapest digital is still better than any analog mechanism that money can buy.


Ah ok, I didn't realize analog watches were that bad.


> A real TXCO (basically putting the crystal inside a temp calibrated oven) is not feasible on a watch battery but compensation sure would be.

Nitpick: you're thinking of an OCXO for crystals inside an oven; TCXOs are temperature-compensated crystal oscillators.


Well, that, and there's no such thing as a "solid state laser gyro". I believe the GP is confusing MEMS solid-state gyros and ring laser gyros (which can use a solid-state laser, but AFAIK aren't ever called "solid state laser gyro").

MEMS gyros have too much bias drift (both on a unit basis due to fab processes and on a temperature basis) to be practically useful here. You can measure the earth's rotation with a MEMS gyro, but you're really at the limit.
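
To put numbers on that (the MEMS figure is a ballpark, not from any specific datasheet):

    import math

    earth_rate = math.degrees(7.2921e-5) * 3600   # ~15.04 deg/hour
    # Consumer MEMS gyro bias instability is on the order of 1-10 deg/hour,
    # i.e. the same order of magnitude as the signal being measured.
    print(f"earth rate: {earth_rate:.2f} deg/h")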


Heh, while I get what you're saying... despite being somewhat pedantic... there are in fact MEMS FOGs now too :). https://www.anellophotonics.com/technology


Laser ring gyros are referred to as solid state laser gyros.


> The second solution is much more plausible, but still very difficult. The user would have to be pointing the camera at the subject for long enough such that the drift in their aim at the subject is smaller than the drift from the rotation of the earth. This is also implausible. What is concerning though, is that this second method is one that could work very well to cancel out Earth’s rotation on the CIPA specified stabilization test apparatus.

So, basically dieselgate but for image stabilization


It seems the camera could use optical flow to get a baseline reading and calibrate the inertial frame offsets. They don't need to point accurately for a long time?

Or maybe that is the method they assume for the second solution and they calculated that it's infeasible.


Can somebody ELI5 this to me?

The image with the 2 earths.. that only works if the camera is not also on the ground, but it is? How is the rotation of the object and the camera not identical? Why would it rotate ‘upwards’?

Also, if the issue is relative motion or rotation between camera and object, wouldn’t two sensors, one on the camera and one on the subject be able to solve this, since we can see if their rotations/movement match up or not?


Imagine the camera were floating just above the surface of the Earth, and also that it had perfect image stabilization. This image stabilization would keep the camera always oriented in the same direction. Same direction relative to what? To the rest of the universe. So if it was pointing right at a star, it would continue pointing directly at that star as it went around and around the Earth. From our perspective on the surface, the camera would appear to be flipping over itself as it kept pointing at that star.

Unfortunately, this would be pretty bad for taking a picture of something that was right in front of the camera (relative to the surface of the Earth). You'd be in front of the camera, ready for your picture, and the camera would appear to start rotating as it kept that distant star in view.

So with a perfect image stabilizer, this is what the camera is actually trying to do, even when standing on the Earth with a tripod. It actually senses the rotation of the Earth, and tries to cancel it out, just like it would cancel out your hands shaking. But while it's good to cancel out your hands shaking (because that's a motion that's independent of the subject of the photo), it's not good to cancel out the rotation of the Earth (because the subject of the photo is actually moving with you).
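
To put a number on how much this matters for a photo, here's the worst-case smear a "perfect" stabilizer would introduce by cancelling the Earth's rotation (the pixel pitch is an assumed typical value):

    OMEGA_E = 7.2921e-5       # earth rotation, rad/s
    focal_mm = 200.0
    exposure_s = 1.0
    pixel_um = 4.0            # assumed typical pixel pitch

    # small-angle approximation: displacement on sensor = focal length * angle
    drift_um = focal_mm * 1000 * OMEGA_E * exposure_s
    print(f"{drift_um:.1f} um, ~{drift_um / pixel_um:.1f} px of smear")
    # ~14.6 um, i.e. 3-4 pixels over a 1 s exposure at 200 mm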


> Same direction relative to what? To the rest of the universe

By this logic, the Earth's revolution would also cause a similar issue, and an even worse one. But in reality only the rotation does.

I think at least some part of your explanation doesn't add up.


The Earth's revolution around the sun? What makes you think it doesn't? It's just that the effect is 1/365 the size, on the same axis as the rotation.


The same axis of rotation? Pretty sure they're about 23° off.


Alright, close-ish to the same axis.


It does. A perfect gyro will remain on a single axis relative to the rest of the universe while the Earth goes around the sun, and while the galaxies swirl around.

I'll leave it to you to figure out how "perfect" it would need to be, and what the actual error that the stabilizer would need to account for if the gyroscope is accurate enough to detect the motion of the Earth around the sun, compared with the error created by the Earth's rotation.


That makes sense. It doesn’t make sense why you wouldn’t simply correct for that, or why having 2 sensors wouldn’t fix it?


The position of the gyro is attached to the earth surface but its orientation is not. See Foucault's Pendulum.


Thanks! That makes it incredibly obvious


We all want to keep missiles out of the hands of bad people.

Parts to make really good cameras could be taken out and used in missiles, to tell them where to go.

So we now have laws to keep those really good parts out of cameras, for safety. Cameras still work fine, but you need a tripod to get good pictures when it's dark out.



This can be fixed in software:

you can back-calculate orientations from high-pass-filtered gyro data, use them to rotate the unfiltered gyro data into the current reference frame, then low-pass the unfiltered but rotation-corrected gyro data to get the earth's rotation axis in the current reference frame. From that you can estimate the expected rotation that should be ignored.
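
A loose sketch of that pipeline (one-pole filters with made-up cutoffs, scipy for the rotation bookkeeping; a real implementation would need proper bias handling):

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def earth_rate_in_body_frame(gyro_samples, dt, fc_hp=0.05, tau_lp=30.0):
        # gyro_samples: (N, 3) raw body-frame rates in rad/s
        hp_avg = np.zeros(3)            # slow tracker of the near-DC component
        earth_ref = np.zeros(3)         # earth-rate estimate, reference frame
        q = R.identity()                # body-to-reference orientation
        beta = 2 * np.pi * fc_hp * dt   # one-pole high-pass coefficient
        alpha = dt / (tau_lp + dt)      # one-pole low-pass coefficient
        for w in gyro_samples:
            hp_avg += beta * (w - hp_avg)
            q = q * R.from_rotvec((w - hp_avg) * dt)  # integrate shake only
            w_ref = q.apply(w)                        # raw rate, common frame
            earth_ref += alpha * (w_ref - earth_ref)  # low-pass: earth axis
        return q.inv().apply(earth_ref)  # rotation to ignore, in body frame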


Solution (2) as written seems to imply that the camera can only use the gyroscope signal while the camera is pointed at the subject, but I cannot see why that is a strong limitation.

In theory, you can take the last N seconds of data from the gyroscope (I assume it is running while the camera is active) to get the overall drift, even if it is tumbling around for a while before being pointed at the subject... assuming the tumbling has enough periods of time that are correlated with the earth's rotation (e.g. someone carrying it, not pointing it at an aircraft or something moving EW for the window duration that is anticorrelated with the rotation).


That would only work in the case that the camera is fixed on a tripod and has a long period of stable / rigid pointing before the exposure during which to collect this data. This is sometimes the situation in which image stabilization is used. (But if you can be that stable for that long on a tripod, you may not actually need image stabilization.)

By far the more common case for image stabilization is one in which the photographer is hand-holding the camera and may not frame the subject until the moment before the exposure begins. The camera movement will likely be several orders of magnitude (~4 to 7) larger than the drift that you want to measure. A low pass filter will tell you nothing at all.

At a certain point we can just start using guide stars [0].

[0] https://en.wikipedia.org/wiki/Guide_star


> The first isn’t a good solution for many reasons. Don’t have GPS signal? Shooting next to a magnet? Your system won’t work.

These seem trivial to work around. Just store the last known position and use that. It's rare that you'll be without a GPS signal or beside a magnet, and you certainly won't be traveling long distances in those conditions. And since when do magnets block GPS signals?


It’s not that a magnet blocks GPS signals, but it does affect the compass in the context of using 6 of the 9 degrees of freedom in the first proposed solution: “Use the camera’s GPS, accelerometer, and compass to calculate exactly where it is pointed and its latitude. ” (This solution should also do sensor fusion with the gyroscope, not just accelerometer and compass for orientation from a 9DoF system.)


Version 2 sounds to me like the probable reason cameras like the OM-1 Mark II can go over 8 stops. Yes, it is probably not a simple task to measure the earth's drift with the gyroscopes, but there is one thing that might help: the rate of that drift is exactly known, it is the speed of the earth's rotation. So it should be possible to tune a very narrow filter to that rate and analyze the gyroscope signal only for that component. With that one could at least partially compensate for the drift.


Nikon claims 8.0 stops of "VR image stabilization" for their Zf camera (released late in 2023).

https://www.nikonusa.com/p/z-f/1761/overview

("Based on CIPA standards; when using the telephoto end of the NIKKOR Z 24-120mm f/4 S" - for clarity, that lens does not have optical VR in the lens itself, so this is all based on in-body stabilization.)


On the other hand, that should be awesome for astrophotography.


The issue is rotation of the sky about the line of sight axis. Whether the exposures are short or long, over time the sky will rotate more than what an in-camera system can compensate for (the amount of rotation depends on location/time/direction). Over these timescales a rotator that can perform larger movements is needed. This can be provided by an equatorial mount or an internal rotator.


I believe that fancy astrophotography tripods already do that rotation for you, right?

I think that for astrophotography, the shutter times are so long that you have to build it into the tripod, instead of relying on the tiny amount of stabilization that can be done in-camera.

Although maybe it would be helpful to cancel out some motor noise or vibrations from the tripod. But probably the existing image stabilization already does this.


Fancy astrophotography tripods--really, the mounts--do that rotation for you. That's why they exist. Even fancier ones exist that permit close-to-arbitrary slewing. Those can be used as a go-to mount: with the right software, it can image wherever in the sky it's pointed, take the current time, plate-solve for where it's pointed, then finally point at whatever target you actually want to shoot.

For the very long exposure times, you can also hook a second camera up and run closed loop control on a specific star to keep your primary image sensor trained on the correct target to even tighter tolerances. There's companies making cameras that combine both the primary and secondary camera into a single housing so you don't need to fit a second camera + lens to your setup, or insert a prism to pick off part of the image to go to a second camera.

Amateur astrophotography today does tricks you needed access to a dedicated lab to do in previous decades. It's amazing!


Pentax cameras take a different approach with stabilization: rather than stabilize inside the lens, which means every lens ships its own stabilization solution, they stabilize the sensor itself.

It limits stabilization to two axes, but now any lens is essentially stabilized. It also lets them do some tricks, since it's so integrated: one is sub-pixel sensor shifts for higher-resolution photos, and another is astrophotography tracking when GPS data is available.

Much more limited in scope than a full tracking gimbal, but not bad considering it's built into the camera (earlier bodies had a GPS attachment that slotted into the hot shoe connector): https://www.lonelyspeck.com/pentax-k-1-mark-ii-astrophotogra...


Most cameras these days (other than the low end and the very high end) have IBIS (In-Body Image Stabilization) built in, plus stabilization built into the longer lenses (typically > 100mm). In higher-end/more recent cameras, IBIS and lens stabilization can work together to improve how effectively the system works. I don't know if it's true universally, but the recent cameras in Nikon's ecosystem which I'm familiar with use a 5-axis IBIS unit. A quick search suggests the K-1 II and some other Pentax cameras also moved to 5-axis IBIS, probably one of the reasons most brands are claiming 5+ stops in-body these days.

OM System, formerly Olympus, does some very cool tricks using the tiny micro 4/3 sensor combined with a sick IBIS unit, allowing hand-held astrophotography that the larger companies haven't bothered with.


This is also how any IS lenses for film cameras work since moving the film around isn't entirely practical.

Nikon: https://www.nikonusa.com/learn-and-explore/c/products-and-in...

You can see the VR lens element there.

The Canon version: https://www.canon-europe.com/pro/infobank/image-stabilisatio...


> I believe that fancy astrophotography tripods already do that rotation for you, right?

There are various types of mounts, and each type can be either basic or fancy. The specific type of mount that deals with rotation of the sky (around the celestial poles):

* https://en.wikipedia.org/wiki/Equatorial_mount

You can get non-fancy ones (US$240):

* https://optcorp.com/collections/equatorial-mounts/products/o...

Or fancy ones ($20K):

* https://optcorp.com/collections/equatorial-mounts/products/a...


It has no bearing. Tracking is how you keep stars from smearing, not stabilization.


Perhaps in some camera firmware bug database there's a closed bug marked: "Won't fix. Tested working in orbit."


This is analogous to astro-photography problems with keeping stars as points rather than as blurred lines in long exposures. If you think about it, if a long exposure at night has a static landscape but moving stars, the IBIS equivalent would have static stars and a moving landscape :)


There are some fairly enjoyable time lapse videos taken in a non rotating frame:

  https://youtu.be/DmwaUBY53YQ
  https://youtu.be/zRTJ5ISmVXE


You should be able to calculate it out by telling the user to press a button and after this, not rotating the camera away.

Right?

Might just not be practical at all.

On the other hand, shouldn't the earth rotate fast enough to figure this out in a short timeframe while the photographer starts looking through the finder?


Yes, basically Method (2) with a stable measurement window. Just put the camera down on a stable surface and click a button. Let the system wait some ms for the click disturbance to pass, then integrate the signal over some fixed time to establish the rotation, then pick up and continue...
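
As a hypothetical sketch of that calibration step (read_gyro is a made-up driver call; the window length trades off against gyro noise):

    import numpy as np

    def calibrate_rest_rate(read_gyro, dt=0.01, settle_s=0.1, window_s=5.0):
        # Average the gyro while the camera rests on a stable surface.
        # The result is earth rotation *plus* gyro bias, in the body frame;
        # stabilization can subtract it from subsequent readings.
        for _ in range(int(settle_s / dt)):   # let the button press ring down
            read_gyro()
        n = int(window_s / dt)
        return sum(np.asarray(read_gyro()) for _ in range(n)) / n

One catch: this measures earth rate and gyro bias together, so the estimate goes stale as the bias drifts.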


Why not stabilize optically?

I am probably missing something huge, but if the goal is a stable image, why use gyros? Use the image itself to apply the correction factor to the final integration, sort of the same way videos are stabilized.


You can do this two ways: you can take a bunch of images, and align and stack them. Or you can take one motion-blurred image and infer the convolution kernel somehow. The former is undesirable because each frame you take has a fixed amount of "read noise" from the sensor. So you'd change your sensor noise for the worse.

The second way is undesirable because it's really hard. There is a lot of research into this and some of the results are good but some are not.


This would make the resulting image frame smaller. A no-no in the current full-frame meta.

Photo hobbyists are snobs :)


i'm curious how the OM-1 MK2 gets around this to achieve 8.5 stops.

https://explore.omsystem.com/us/en/om-1-mark-ii


I still don't quite follow the explanation. The duck and I are on the surface of the same body and are rotating together, maintaining a constant distance... why does Earth rotation need to be corrected for?


In terms of flatland:

Ignore the camera. Instead you have a planet (a circle in flatland), a gyroscope (an arrow that always points in the same direction on the page in flatland), and Mr Square.

        --> [.]
             |
        /----\
        |    |
        \----/
Start off at noon, with Mr Square and the arrow at the top of the planet, the gyroscope to the left of Mr Square pointing at him. Now progress time by 6 hours, by rotating the planet clockwise by 90 degrees. Mr Square and the gyroscope will move with the surface of the planet, resulting in them being on the right side of the circle on the page (the gyroscope above Mr Square on the page). Mr Square's feet will be on the surface of the planet, meaning his rotation matched the planet. However, the gyroscope always points in the same direction on the page. It's now pointing at the sky.

        /----\
        |    | -->
        \----/-[.]
In conclusion: both Mr Square and the gyroscope move with the surface of the planet - in exactly the same way. However, Mr Square will always be standing (along with everything else on the planet), while the gyroscope always points in the same direction on the page (irrespective of the time of day). A camera using the gyroscope would have to account for that.

We wouldn't have the same issue on a (non-rotating) space station. That's why planetary rotation is blamed.


I asked myself how the gyroscope manages to always point in the same direction. The answer is that only objects in translational motion form an inertial frame; rotating objects don't:

> Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, it is a good approximation of an inertial reference frame in many low precision applications.

https://en.wikipedia.org/wiki/Inertial_frame_of_reference


It's about actual gyroscopes (motion sensors), not optical stabilisation. Gyroscopes in cameras are now so good they can pick up the earth's rotation. Perfect for stabilising an image of the stars, not so good for stabilising an image of a duck translating relative to those stars. For that you would need optical stabilisation. In-body stabilisation is inertial, not optical.


There was an escape system in the Soyuz rocket that fired if the rocket tilted too much. It was based on gyroscopes.

Once, a launch was aborted just before liftoff. The rocket stayed on the pad and the cosmonauts were sitting in the spacecraft for some time. Suddenly the abort system fired and pulled the capsule from the rocket. They landed safely on parachutes.

It was discovered that the Earth had rotated and the gyroscopes had registered the apparent tilt of the rocket, so the escape system fired.


I think you’re mixing up different launches: Soyuz 7K-OK No.1 and Soyuz 7K-ST No.16L

Soyuz 7K-OK No.1 was uncrewed and likely had the quirk with the gyros. One person near the launch on the ground was killed.

https://en.wikipedia.org/wiki/Soyuz_7K-OK_No.1

> Initially, it was suspected that the booster had been bumped when the gantry tower was put back in place following the abort and that this somehow managed to trigger the LES, but a more thorough investigation found a different cause. During the attempted launch, the booster switched from external to internal power as it normally would do, which then activated the abort sensing system. The Earth's rotation caused the rate gyros to register an approximately 8° tilt 27 minutes after the aborted liftoff, which the abort sensing system then interpreted as meaning that the booster had deviated from its flight path, and thus it activated the LES. The abort sensing system in the Soyuz was thus redesigned to prevent a recurrence of this unanticipated design flaw. On the other hand, the LES had also worked flawlessly and demonstrated its ability to safely pull cosmonauts from the booster should an emergency arise as it did years later in the Soyuz 7K-ST No.16L abort (26 September 1983).

The emergency condition of the Soyuz 7K-ST No.16L abort was not caused by rotation of the Earth, but by multiple failures that caused damage to the launch vehicle:

https://en.wikipedia.org/wiki/Soyuz_7K-ST_No.16L

> The crew was sitting on the pad awaiting fueling of the Soyuz-U booster to complete prior to liftoff. Approximately 90 seconds before the intended launch, a bad valve caused nitrogen pressurisation gas to enter the RP-1 turbopump of the Blok B strap-on. The pump began spinning up, but with no propellant in it, the speed of rotation quickly exceeded its design limits which caused it to rupture and allow RP-1 to leak out and start a fire which quickly engulfed the base of the launch vehicle. Titov and Strekalov could not see what was happening outside, but they felt unusual vibrations and realized that something was amiss. The launch control team activated the escape system but the control cables had already burned through, and the Soyuz crew could not activate or control the escape system themselves. The backup radio command to fire the LES required 2 independent operators to receive separate commands to do so and each act within 5 seconds, which took several seconds to occur. Then explosive bolts fired to separate the descent module from the service module and the upper launch payload shroud from the lower, the escape system motor fired, dragging the orbital module and descent module, encased within the upper shroud, free of the booster with an acceleration of 14 to 17g (137 to 167 m/s²) for five seconds. According to Titov, "We could feel the booster swaying from side to side. Then there was a sudden vibration and a jerking sensation as the LES activated".


Thanks! This happens so often when going from memory...


No worries! I wasn’t familiar with these launches offhand myself so your post piqued my curiosity leading me to look the history up myself. And you weren’t far off - if the original launch failure hadn’t occurred, it’s entirely likely that the second launch failure would have been lethal and possibly even more other launches besides that one. Safety and reliability is a moving target but is always worth investing in because it also helps drive costs down and reduces failure modes that may not even be detectable or predictable due to black swan events.


It's explained here:

> Your camera, which is using its IBIS system to attempt to keep everything as still as possible, may not realize that you are rotating with your subject and will instead try to zero out any rotation of the camera, including that of the Earth

The problem is that the stabilization system tries to compensate for the rotation of the Earth (because it can't tell the difference between the rotation of the Earth, which shouldn't be compensated for, and the movement of the holder, which should be).

So it would work if you were taking a photo of a subject not rotating together with the Earth. Like the stars.


I guess I couldn't quite grok how IBIS would measure the Earth's rotation whilst being on Earth, but as I've now just learned (through various slaps of the forehead) a perfectly vertical spinning gyroscope will definitely tilt with time due to the Earth's rotation and this is measurable to high degrees of precision.


Not just randomly tilt - it will align itself with the poles of a spinning celestial body. It's used in applications that can't rely on correct magnetic variation, like surveying and aircraft inertial navigation systems.


Why does it stop at Earth's rotation? What about revolution around the Sun?


It also does track the revolution of the Earth around the Sun, and that of the Sun around the Milky Way, as well as the various influences over the Milky Way that make it go less than straight on its way towards the Great Attractor.

Those movements just happen to be slow enough that they don't limit image stabilization to 6.3 stops.


I still don't quite get it.

Under what definition the Earth's revolution is "slower" than its rotation?

Why can the camera's stabilization system detect the rotation and correct it (and causes undesirable result) but not the revolution?


The relevant "speed" is the change of the direction you are pointing at. The Earth rotates around itself in 24 hours, but around the Sun in 365 days, so the daily rotation is 365x as fast. We also rotate around the center of the Milky Way every couple of hundreds of millions of years.


Ah, so the angle (orientation?) is what actually matters? It makes sense now.

Thanks!


At least with respect to the influence of the Earth's rotation. With respect to compensating actual camera shake, modern systems correct 5 axes: 3 for rotation around the 3 spatial axes, and 2 translational, which leaves only motion towards or away from the subject uncorrected (that usually only expresses itself as a need to refocus, which is far beyond what camera shake causes, except in macro photography).


Per my other comment: Earth isn't attached to the sun, the sun isn't attached to the galactic center (they are orbiting). They are independent rotational frames of reference. They are also gyroscopes in their own right.

As far as taking pictures of other things on Earth, at least. Taking a picture of another planet/star/galaxy would also face similar challenges.


What's the difference between our "attachment" to Earth compared to Earth's attachment to the Sun? Aren't both doing circular motion due to gravity (in us-Earth's case, gravity + support force from the ground) and inertia?


The Earth is in free fall around the Sun. We are not in free fall. https://en.wikipedia.org/wiki/Free_fall


It doesn't.


Eventually we will have to compensate for galactic rotation.


I never thought of that one. It's fun to think "we know the whole universe isn't spinning very fast, because our gyros are stable". Feels both obvious and somehow bigger-than-life to me.


We are not kinetically bound to the galactic center, there is no friction causing earth to remain "upright" in respect to the galaxy. Earth is also a freely rotating inertial body and, even though wobbly, it is itself a gyroscope.

The next level of stabilization would probably be gravitational waves.


Anyone who’s read the short story “The Billiard Ball” by Asimov would have taken it into account.


Let's do a small thought experiment: assume you have fixed your camera and the duck on a surface. Then, while taking the photo, you rotate the surface. The motion sensor in the camera tries to cancel out this motion, which would be right for taking a photo of something that's not fixed to the surface, but works badly for the duck that's moving with the camera.


The opposite: Earth rotation is measured by the camera and can't be easily distinguished from camera rotation relative to earth. So image stabilization will also correct for earth rotation, which is undesirable.


Well if it keeps pointing in the exact same direction then it would stay fixed on whatever star it is currently pointing towards.

Which is normally not a problem, but relative to something on the surface of the Earth the stars do move.

So I guess you should ask people to stand directly in front of Polaris if at all possible.


You should be able to exceed 6.3 stops if you are pointing north/south rather than east/west, right? Maybe they are just measuring it pointing north/south.


Would it be possible to correct for the rotation by counter rotating if the orientation of the camera is known (or determined by GPS + compass)?


Bullshit. It's ITAR, they don't want parts floating around in the world that can make a dead nuts accurate INS - inertial navigation system, as this enables weapons we don't want in the wild.

You can stabilize out everything and account for the rotation by simply watching the vector of gravity over time.


Nikon has 8 stops so they somehow beat physics


6.3 stops is a lot, though. That's basically the fully usable aperture range of a kit zoom lens.


What are these "stops" in this context, for the non-photo nerds ?


A stop generally is a doubling/halving of light intensity at the sensor. For apertures this means a factor of sqrt(2) on the diameter (because the area is what matters), for exposure times a doubling/halving of the time.

"Stops of stabilization" in this specific context refers to a standardized CIPA test which determines a shutter speed where the image remains acceptably sharp. They then calculate the number of stops to 1/focal-length, which is a rule of thumb for getting sharp images from the 1950s. So if a 200mm lens produced a sharp image at 1/10s in the CIPA test, then that would be 1/10 -> 1/20 -> 1/40 -> 1/80 -> 1/160 -> 1/200 about 4.3 "stops of stabilization".

The results from the CIPA test don't really hold up to the real world though once you move beyond ~4 stops.


It's an abstraction over aperture size and exposure time. Exposing twice as long gives the same light as an aperture with twice the area. Both are adjusted in discrete steps in-camera, so they are abstracted as stops. Exposure time is limited by movement, and aperture size is limited by the optics themselves. Sensor stabilization gains "stops" by extending the exposure time before the image becomes blurry from the photographer's movement, letting in correspondingly more light.


Yes, or considered another way: a 1/25th shutter vs almost 1/2000th, i.e. a lot of motion blur vs. virtually nothing being able to provoke blurring.


Except a moving subject, of course.


At 1/2000th both a running cheetah and a running squirrel are completely frozen. I haven’t yet found anything that isn’t frozen with that setting. I suspect at that point you’re in the domain of bullets, very outstretched springs and the like.

Edit: yeah, a speeding bullet caught at 1/5000th: <https://flickr.com/photos/hoohaaphotos/5587502201/>


Stabilization doesn't help with subject movement, it only helps with the camera's shake.

So with this level of stabilization, you'll take a picture of a running cheetah at 1/25 as if it were 1/2000 only as far as the stability of the camera is concerned. So if you're not tracking the cheetah you'll get a sharp background because the shaking of your hands has been nullified, but the cheetah is still moving within the frame and still blurry.


However, it's not the aperture range that matters. Theoretically, if the Earth were not rotating, then 10 stops would still be useful for long-exposure photography. In other words, the stop differences in stabilization are more useful when you think of them in terms of shutter speed, NOT aperture.


Is a plain phone gyroscope enough to detect Earth rotation? Is there an app for that?
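
Back-of-the-envelope, with a ballpark noise density for phone-class parts (the real limit is bias drift, which this ignores):

    import math

    earth_rate = math.degrees(7.2921e-5)   # ~0.0042 deg/s
    noise = 0.01                            # deg/s/sqrt(Hz), ballpark phone gyro

    # white noise averages down as 1/sqrt(T):
    T = (noise / earth_rate) ** 2
    print(f"~{T:.0f} s of averaging for noise to reach the earth-rate level")
    # a few seconds in principle; in practice uncompensated bias drift dominates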


Yet another example of b0rked / unescaped TeX, specifically log vs \log in this case. Blows my mind that nobody sees it...



